
Tech News - Languages

202 Articles

OpenJDK Project Valhalla is now in Phase III

Prasad Ramesh
10 Oct 2018
3 min read
Project Valhalla is an OpenJDK project started in 2014 in an experimental stage. It is headed by Oracle Java language architect Brian Goetz and supported by the HotSpot group. The project was created to introduce value-based optimizations to JDK 10 and above. The goal of Project Valhalla is to explore and support the development of advanced Java VM and language features like value types, generic specialization, and variable handles.

The Project Valhalla members met last week in Burlington, MA to discuss the current project status and future plans in detail. Goetz notes that it was a very productive meeting, with members either attending in person or connecting via calls. After over four years of the project, the members decided to meet as it seemed like a good time to assess where things stood. Goetz states: "And, like all worthwhile projects, we hadn't realized quite how much we had bitten off. (This is good, because if we had, we'd probably have given up.)" This meeting marks the initiation of Phase III of Project Valhalla.

Phase I focused on language and libraries: trying to figure out what exactly a clean migration to value types and specialized generics would look like. This included steps to migrate core APIs like Collections and Streams, and understanding the limitations of the current VM, which shaped the vision for the VM support that would be needed. Phase I produced three prototypes, Models 1-3. The exploration areas of these models included specialization mechanics (M1), handling of wildcards (M2), and classfile representations for specialization and erasure (M3). At this point, the list of VM requirements became too long and the team had to take a different approach.

Phase II took on the problem from the VM up, with two additional rounds of prototypes, namely MVT and LW1. LW1 was a risky experiment: sharing the L-carrier and a* bytecodes between references and values without losing performance. If this could be achieved, many of the problems from Phase I would go away. The experiment was successful, and the team now has a richer base for further work.

The next target is L2, which will capture the choices made so far, provide a useful testbed for library experiments, and set the stage for tackling the remaining open questions between now and L10. L10 is the target for a first preview, which eventually should support value types and erased generics over values.

For more information, you can read the mail on the Project Valhalla mailing list.

JDK 12 is all set for public release in March 2019
State of OpenJDK: Past, Present and Future with Oracle
No more free Java SE 8 updates for commercial use after January 2019

.NET team announces ML.NET 0.6

Savia Lobo
10 Oct 2018
3 min read
On Monday, .NET engineering team announced the latest monthly release of their cross-platform, open source machine learning framework for .NET developers, ML.NET 0.6. Some of the exciting features in this release include new API for building and using machine learning models, performance improvements, and much more. Improvements in the ML.NET 0.6 A new LearningPipeline API for building ML model The new API is more flexible and enables new tasks and code workflow that weren’t possible with the previous LearningPipeline API. The team further plans to deprecate the current LearningPipeline API. This new API is designed to support a wider set of scenarios. It closely follows ML principles and naming from other popular ML related frameworks like Apache Spark and Scikit-Learn. Know more about the new ML.NET API, visit the Microsoft blog. Ability to get predictions from pre-trained ONNX Models ONNX, an open and interoperable model format enables using models trained in one framework (such as scikit-learn, TensorFlow, xgboost, and so on) and use them in another (ML.NET). ML.NET 0.6 includes support for getting predictions from ONNX models. This is done by using a new transformer and runtime for scoring ONNX models. There are a large variety of ONNX models created and trained in multiple frameworks that can export models to ONNX format. Those models can be used for tasks like image classification, emotion recognition, and object detection. The ONNX transformer in ML.NET provides some data to an existing ONNX model and gets the score (prediction) from it. Performance improvements In the ML.NET 0.6 release, there are made several performance improvements in making single predictions from a trained model. Two improvements include: Moving the legacy LearningPipeline API to the new Estimators API. Optimizing the performance of PredictionFunction in the new API. Following are some comparisons of the LearningPipeline with the improved PredictionFunction in the new Estimators API: Predictions on Iris data: 3,272x speedup (29x speedup with the Estimators API, with a further 112x speedup with improvements to PredictionFunction). Predictions on Sentiment data: 198x speedup (22.8x speedup with the Estimators API, with a further 8.68x speedup with improvements to PredictionFunction). This model contains a text featurizer, so it is not surprising to see a smaller gain. Predictions on Breast Cancer data: 6,541x speedup (59.7x speedup with the Estimators API, with a further 109x speedup with improvements to PredictionFunction). Improvements in Type system In this ML.NET version, the Dv type system has been replaced with .NET’s standard type system. This makes ML.NET easy to use. ML.NET previously had its own type system, which helped it deal with missing values (a common case in ML). This type system required users to work with types like DvText, DvBool, DvInt4, etc. One effect of this change is, only floats and doubles have missing values which are represented by NaN. Due to the improved approach to dependency injection, users can also deploy ML.NET in additional scenarios using .NET app models such as Azure Functions easily without convoluted workarounds. To know more about other improvements in the ML.NET 0.6 visit the Microsoft Blog. Microsoft open sources Infer.NET, it’s popular model-based machine learning framework Neural Network Intelligence: Microsoft’s open source automated machine learning toolkit .NET Core 3.0 and .NET Framework 4.8 more details announced

Clojure 1.10.0-beta1 is out!

Bhagyashree R
08 Oct 2018
3 min read
On October 6, the release of Clojure 1.10.0-beta1 was announced. With this release, Clojure 1.10 is now considered feature complete and only critical bug fixes will be addressed.

Changes introduced in Clojure 1.10

Detecting error phase

Clojure errors can occur in five distinct phases: read, macroexpand, compile, eval, and print. Clojure and the REPL can now identify these phases in the exception and/or the message. The read/macroexpand/compile phases produce a CompilerException and indicate the location in the caller source code where the problem occurred. CompilerException now implements IExceptionInfo, and ex-data reports exception data including the optional keys:

:clojure.error/source: name of the source file
:clojure.error/line: line in the source file
:clojure.error/column: column of the line in the source file
:clojure.error/phase: the phase (:read, :macroexpand, :compile)
:clojure.error/symbol: the symbol being macroexpanded or compiled

Also, clojure.main now contains a new function called ex-str that can be used by external tools to get a REPL message for a CompilerException that matches the clojure.main REPL behavior.

Introducing tap

tap, a shared and globally accessible system, is used for distributing a series of informational or diagnostic values to a set of handler functions. It acts as a better debug prn and can also be used for facilities like logging.

Read string capture mode

A new function, read+string, is added that not only mimics read but also captures the string that is read. It then returns both the read value and the (whitespace-trimmed) read string.

prepl (alpha)

This is a new stream-based REPL with structured output. The following new functions are added in clojure.core.server:

prepl: a REPL with structured output (for programs)
io-prepl: a prepl bound to *in* and *out*, suitable for use with the Clojure socket server
remote-prepl: a prepl that can be connected to a remote prepl over a socket

prepl is currently alpha and subject to change.

Java 8 or above required

Clojure 1.10 now requires Java 8 or above. The following are a few of the updates related to this change and Java compatibility fixes for Java 8, 9, 10, and 11:

Java 8 is now the minimum requirement for Clojure 1.10
Embedded ASM is updated to 6.2
Reliance on the jdk166 jar is removed
An ASM regression is fixed
Invalid bytecode generation for static interface method calls in Java 9+ is fixed
Reflection fallback for --illegal-access warnings in Java 9+ is added
A brittle test that failed on Java 10 builds due to serialization drift is fixed
A type hint is added to address reflection ambiguity in JDK 11

Other new functions in core

To increase the portability of error-handling code, the following functions have been added:

ex-cause: to extract the cause exception
ex-message: to extract the cause message

To know more about the changes in Clojure 1.10, check out its GitHub repository.

Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new?
Java 11 is here with TLS 1.3, Unicode 11, and more updates
Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes

Stack Exchange migrates to .NET Entity Framework Core (EF Core), Stack Overflow to follow soon

Savia Lobo
08 Oct 2018
2 min read
Last week, Nick Craver, Architecture Lead for Stack Overflow, announced that Stack Exchange is migrating to .NET Entity Framework Core (EF Core) and asked users to help test it. The Stack Exchange community has deployed a major migration from its previous Linq-2-SQL data layer to EF Core. Following this, Stack Overflow may also get a partial tier to deploy later today. In his post, Nick said, "Along the way we have to swap out parts that existed in the old .NET world but don't in the new."

Some changes in Stack Exchange and Stack Overflow after the migration to EF Core

The team said that they have safely diverged their Enterprise Q3 release, which means they work on one codebase for easier maintenance, and the latest features will also be reflected in the EF Core work.

Stack Overflow was written on top of a data layer called Linq-2-SQL. This worked well but had scaling issues, following which the team replaced the performance-critical paths with a library named Dapper. However, until today some old paths, mainly where entries are inserted, remained on Linq-2-SQL. The team also stated that, as part of the migration, a few code paths went to Dapper instead of EF Core; in other words, Dapper wasn't removed and still exists after the migration.

This migration may affect posts, comments, users, and other 'primary' object types in Q&A. Nick added, "We're not asking for a lot of test data to be created on meta here, but if you see something, please say something!". He further added, "The biggest fear with a change like this is any chance of bad data entering the database, so while we've tested this extensively and have done a few tests deploys already, we're still being extra cautious with such a central & critical change."

To know more about this in detail, head over to Nick Craver's discussion thread on Stack Exchange.

.NET Core 3.0 and .NET Framework 4.8 more details announced
.NET Core 2.0 reaches end of life, no longer supported by Microsoft
Stack Overflow celebrates its 10th birthday as the most trusted developer community

KotlinConf 2018: Kotlin 1.3 RC out and Kotlin/Native hits beta

Prasad Ramesh
05 Oct 2018
2 min read
Day 2 of KotlinConf 2018 ended yesterday, and several announcements were made regarding the programming language. There is one more day of the conference, which will be streamed live on the Kotlin website. Here are some of the announcements made at the conference so far.

Kotlin 1.3 is now RC

Kotlin 1.3 RC is here and brings a lot of new features. Some of them are the following.

Contracts

The Kotlin compiler does extensive static analysis to show warnings and reduce boilerplate. 'Contracts' in Kotlin 1.3 allow functions to explicitly describe their own behavior in a way that is understood by the compiler.

Coroutines

Kotlin coroutines are no longer experimental and will be supported like other features starting from Kotlin 1.3. Coroutines delegate most of the functionality to libraries and help provide a fluid experience that is scalable when needed.

Multiplatform projects

The multiplatform projects model has been reworked to improve expressiveness and flexibility, in line with the language's goal to function on all platforms. Currently, Kotlin supports JVM, Android, JavaScript, iOS, Linux, Windows, Mac, and embedded systems like STM32. This is beneficial for reusing code.

Kotlin/Native is now in beta

Kotlin/Native is designed to enable compilation on platforms where virtual machines do not work, such as embedded devices or iOS. It is a solution for situations when developers need to produce a self-contained program that does not require an additional runtime or virtual machine. After several years of development, Kotlin/Native is now in beta.

The Kotlin Foundation

The Kotlin Foundation is a nonprofit, nonstock corporation created in 2018, with backing from JetBrains and Google. The Kotlin Foundation aims to protect, promote, and advance the development of Kotlin.

New revamped playground

The online environment for trying and learning Kotlin has a new look, new functionality, and a new section called Learn Kotlin by Example. All of this is available directly in your web browser via the Kotlin Playground website.

The first day's talks can be watched on YouTube. You can watch the rest of the Kotlin Conference live on the Kotlin website.

Kotlin 1.3 RC1 is here with compiler and IDE improvements
How to implement immutability functions in Kotlin [Tutorial]
Forget C and Java. Learn Kotlin: the next universal programming language

Facebook releases Skiplang, a general purpose programming language

Prasad Ramesh
01 Oct 2018
2 min read
Facebook last week released Skip, also known as Skiplang, a language it has been developing since 2015. It is a general-purpose programming language that provides caching with features like reactive invalidation, safe parallelism, and efficient garbage collection.

Skiplang features

Skiplang's primary goal is to explore language and runtime support for correct, efficient memoization-based caching and cache invalidation. It achieves this via a static type system that carefully tracks mutability. The language is statically typed and compiled ahead of time using LLVM to produce highly optimized executables.

Caching with reactive invalidation

The main new language feature in Skiplang is its precise tracking of side effects. This includes both the mutability of values and distinguishing non-deterministic data sources from data sources that can provide reactive invalidations, which tell Skiplang when data has changed.

Safe parallelism

Skiplang supports two complementary forms of concurrent programming, both of which avoid the usual thread-safety issues thanks to the language's tracking of side effects. The language supports ergonomic asynchronous computation with async/await syntax. Asynchronous computations cannot refer to mutable state and are therefore safe to execute in parallel, allowing independent async continuations to proceed in parallel. Skiplang also has APIs for direct parallel computation; again, its tracking of side effects prevents thread-safety issues like shared access to mutable state.

An efficient and predictable garbage collector

Skiplang's approach to memory management combines aspects of typical garbage collectors with more straightforward linear allocation schemes. The garbage collector only has to scan the memory that is reachable from the root of a computation, which allows developers to write code with predictable garbage collector overhead.

A hybrid functional, object-oriented language

Skiplang is a mix of ideas from functional and object-oriented styles, carefully integrated to form a cohesive language. Like other functional languages, it is expression-oriented and supports features like abstract data types, pattern matching, easy lambdas, higher-order functions, and (optionally) enforcing pure/referentially-transparent API boundaries. Like OOP languages, it supports classes with inheritance, mutable objects, loops, and early returns. In addition, Skiplang incorporates ideas from "systems" languages, supporting low-overhead abstractions and a compact memory layout of objects.

Learn more about the language from the Skiplang website and its GitHub repository.

JDK 12 is all set for public release in March 2019
Python comes third in TIOBE popularity index for the first time
Michael Barr releases embedded C coding standards

TypeScript 3.1 releases with typesVersions redirects, mapped tuple types

Bhagyashree R
28 Sep 2018
3 min read
After announcing the TypeScript 3.1 RC last week, Microsoft released TypeScript 3.1 as a stable version yesterday. This release comes with support for mapped array and tuple types, easier properties on function declarations, typesVersions for version redirects, and more.

Support for mapped array and tuple types

TypeScript has a concept called 'mapped object types' which can generate new types out of existing ones. Instead of introducing a new concept for mapping over a tuple, mapped object types now just "do the right thing" when iterating over tuples and arrays. This means that if you are using existing mapped types like Partial or Required from lib.d.ts, they will now also automatically work on tuples and arrays. This change eliminates the need to write a ton of overrides.

Properties on function declarations

For any function or const declaration that's initialized with a function, the type-checker will analyze the containing scope to track any added properties. This enables users to write canonical JavaScript code without resorting to namespace hacks. Additionally, this approach to property declarations allows users to express common patterns like defaultProps and propTypes on React stateless function components (SFCs).

Introducing typesVersions for version redirects

Users are always excited to use new type system features in their programs or definition files. However, for library maintainers this creates a difficult situation where they are forced to choose between supporting new TypeScript features and not breaking older versions. To solve this, TypeScript 3.1 introduces a new feature called typesVersions. When TypeScript opens a package.json file to figure out which files it needs to read, it will first look for the typesVersions field. The field tells TypeScript to check which version of TypeScript is running. If the version in use is 3.1 or later, it figures out the path you've imported relative to the package and reads from the package's ts3.1 folder.

Refactor from .then() to await

With this new refactoring, you can easily convert functions that return promises constructed with chains of .then() and .catch() calls into async functions that use await.

Breaking changes

Vendor-specific declarations removed: TypeScript's built-in .d.ts library and other built-in declaration file libraries are partially generated using Web IDL files provided from the WHATWG DOM specification. While this makes keeping lib.d.ts up to date easier, many vendor-specific types have been removed.

Differences in narrowing functions: using the typeof foo === "function" type guard may provide different results when intersecting with relatively questionable union types composed of {}, Object, or unconstrained generics.

How to install this latest version?

You can get the latest version through NuGet, or via npm by running:

npm install -g typescript

According to the roadmap, TypeScript 3.2 is scheduled to be released in November with strictly-typed call/bind/apply on function types. To read the full list of updates, check the official announcement on MSDN.

TypeScript 3.1 RC released
TypeScript 3.0 is finally released with 'improved errors', editor productivity and more
How to work with classes in Typescript

IPython 7.0 releases with AsyncIO Integration and new Async libraries

Natasha Mathur
28 Sep 2018
2 min read
The IPython team released version 7.0 of IPython yesterday. IPython is a powerful Python interactive shell with features such as advanced tab completion, syntactic coloration, and more. IPython 7.0 brings new features such as AsyncIO integration, new async libraries, and async support in notebooks.

IPython (Interactive Python) provides a rich toolkit for interactive computing in multiple programming languages. It's the Jupyter kernel for Python, used by millions of users. Let's discuss the key features in the IPython 7.0 release.

AsyncIO integration

IPython 7.0 comes with the integration of IPython and asyncIO. This means that you don't have to import or learn about asyncIO anymore. AsyncIO is a library that lets you write concurrent code using the async/await syntax. It is used as a foundation for multiple Python asynchronous frameworks providing high-performance network and web servers, database connection libraries, distributed task queues, and so on. Just remember that asyncIO won't magically make your code faster, but it will make concurrent code easier to write. (A small illustrative sketch follows at the end of this piece.)

New async libraries (Curio and Trio integration)

Python has the keywords async and await, which simplify asynchronous programming and drive the standardization around asyncIO. They also allow experimentation with new paradigms for asynchronous libraries. Two new async libraries, Curio and Trio, are now supported in IPython 7.0. Both of these libraries explore ways to write asynchronous programs, and how to use async, await, and coroutines when starting from a blank slate. Curio is a library for performing concurrent I/O and common system programming tasks. It makes use of Python coroutines and the explicit async/await syntax. Trio is an async/await-native I/O library for Python. It lets you write programs that do multiple things at the same time with parallelized I/O.

Async support in notebooks

Async code will now work in a notebook when using ipykernel, for Jupyter users. With IPython 7.0, async will work with all the frontends that support the Jupyter protocol, including the classic Notebook, JupyterLab, Hydrogen, nteract desktop, and nteract web. By default, code will run in the existing asyncIO/tornado loop that runs the kernel.

For more information, check out the official release notes.

Make Your Presentation with IPython
How to connect your Vim editor to IPython
Increase your productivity with IPython
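To make the asyncIO integration concrete, here is a minimal sketch; the coroutine name and the one-second delay are illustrative choices, not something from the release notes. In a plain Python 3.7+ script the coroutine has to be driven explicitly with asyncio.run(), whereas at an IPython 7.0 prompt the same coroutine can simply be awaited at the top level.

import asyncio

async def fetch_greeting(delay: float = 1.0) -> str:
    # Pretend to do some I/O-bound work without blocking the event loop.
    await asyncio.sleep(delay)
    return "hello from asyncio"

# In a regular Python 3.7+ script, an event loop must be started explicitly:
if __name__ == "__main__":
    print(asyncio.run(fetch_greeting()))

# At an IPython 7.0 prompt, the integration means the following line works
# directly, with no explicit event loop management:
#     await fetch_greeting()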

Nim 0.19, a statically typed and compiled language, is out with Nimble 0.9.0 support

Bhagyashree R
28 Sep 2018
3 min read
Earlier this week, the Nim team announced the release of Nim 0.19 with many language changes, async improvements, and support for the latest Nimble, 0.9.0. Nim is a systems and applications programming language that aims for performance, portability, and expressiveness. It is a statically typed and compiled language that promises unparalleled performance in an elegant package. Its common features include:

High-performance garbage collection
Compiles to C, C++ or JavaScript
Runs on Windows, macOS, and Linux

What's new in Nim 0.19?

Language changes and additions

The nil state for strings/seqs is no longer supported and their default value is changed to "" / @[]. In the transition period you can use --nilseqs:on.
It is now invalid to access the binary zero terminator in Nim's native strings, but internally they can still have a trailing zero to support zero-copy interoperability with cstring. In the transition period you can compile your code using the new --laxStrings:on switch.
Instead of being an all-or-nothing switch, experimental is now a pragma and a command-line switch that can enable specific language extensions.
You can make dot calls combined with explicit generic instantiations using the syntax x.y[:z], which the parser rewrites to y[z](x).
You can use func as an alias for proc {.noSideEffect.}.
Nim now supports for-loop macros to make for loops and iterators more flexible to use. This feature enables a Python-like generic enumerate implementation.
In order to implement pattern matching for certain types, case statements can be rewritten via macros.
Keyword arguments after the comma are supported in the command syntax.
Declaration of thread-local variables inside procs is now supported. This implies all the effects of the global pragma.
Nim supports the except clause in the export statement.

Async improvements

Nim's async macro now works completely with exception handling. The use of await in a try statement is also supported.

Supports Nimble 0.9.0

This release comes with Nimble 0.9.0, which was released in August. That version contains a large number of fixes spread across 57 commits. One breaking change to keep in mind is that any package that specifies a bin value in its .nimble file will no longer install any Nim source code files.

Breaking changes

Deprecated symbols in the standard library, such as system.expr or the old type aliases starting with a T or P prefix, have been removed.
SystemError is renamed to CatchableError and is the new base class for any exception that is guaranteed to be catchable.

Read the full announcement on Nim's official website.

Rust as a Game Programming Language: Is it any good?
Java 11 is here with TLS 1.3, Unicode 11, and more updates
The 5 most popular programming languages in 2018

Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes

Prasad Ramesh
21 Sep 2018
3 min read
Rust 2018 RC1 was released yesterday. This new edition of the Rust programming language contains features like raw identifiers, better path clarity, and other additions. Some of the changes in Rust 2018 RC1 include the following.

Raw identifiers

Like many programming languages, Rust has the concept of "keywords". These identifiers cannot be used as names of variables, functions, and so on. With Rust 2018 RC1, raw identifiers let you use keywords in places where they are normally not allowed. The newly confirmed keywords in Rust 2018 RC1 are async, await, and try.

Better path clarity

One of the hardest things for people new to Rust is the module system. While there are simple and consistent rules defining the module system, their consequences can appear inconsistent and hard to understand. Rust 2018 RC1 introduces a few new module system features to simplify the module system and give a better picture of what is going on:

extern crate is no longer needed.
The crate keyword refers to the current crate.
Absolute paths begin with a crate name, where again the keyword crate refers to the current crate.
A foo.rs file and a foo/ subdirectory may coexist; mod.rs is no longer required when placing submodules in a subdirectory.

Anonymous trait parameters are deprecated

Parameters in trait method declarations are no longer allowed to be anonymous. In Rust 2015, the following was allowed:

trait Foo {
    fn foo(&self, u8);
}

In Rust 2018 RC1, all parameters require an argument name (even if it's just _):

trait Foo {
    fn foo(&self, baz: u8);
}

Non-lexical lifetimes

The borrow checker has been enhanced to accept more code, via a mechanism called 'non-lexical lifetimes'. Previously, the code below would have produced an error, but now it compiles just fine:

fn main() {
    let mut x = 5;
    let y = &x;
    let z = &mut x;
}

Lifetimes used to follow "lexical scope": the borrow from y was considered to be held until y went out of scope at the end of main, even though y is never used again in the code. The code above is fine, but the older borrow checker was not able to handle it.

Installation

To try Rust 2018 RC1 you need to install the Rust 1.30 beta toolchain. This beta is a little different from the normal beta, states the Rust Blog.

> rustup install beta
> rustc +beta --version
rustc 1.30.0-beta.2 (7a0062e46 2018-09-19)

The feature flags for Rust 2018 RC1 are turned on and can be used to report issues. These were only a select few changes; other changes in this beta include lifetime elision in impl, T: 'a inference in structs, macro changes, and more. For more information and the complete list of updates, read the Rust edition guide, where the new features are marked as beta.

Rust 1.29 is out with improvements to its package manager, Cargo
Deno, an attempt to fix Node.js flaws, is rewritten in Rust
Creating Macros in Rust [Tutorial]

low.js, a Node.js port for embedded systems

Prasad Ramesh
17 Sep 2018
3 min read
Node.js is a popular backend widely used for web development despite some of its flaws. For embedded systems, there is now low.js, a Node.js port with far lower system requirements. With low.js you can write JavaScript applications that use the full Node.js API and run them on regular computers as well as on embedded devices based on the $3 ESP32 microcontroller.

In low.js, the JavaScript V8 engine at the center of Node.js is replaced with Duktape, an embeddable ECMAScript E5/E5.1 engine with a compact footprint. Some parts of the Node.js system library are rewritten for a more compact footprint and to use more native code. low.js currently uses under 2 MB of disk space, with a minimum requirement of around 1.5 MB of RAM for the ESP32 version.

low.js features

low.js is good for hobbyists and people interested in electronics. It allows using Node.js scripts on smaller devices like routers based on Linux or uClinux without using many resources. This is great for scripting, especially for devices that communicate over the internet.

The neonious one is a microcontroller board based on low.js for ESP32, which can be programmed in JavaScript ES6 with the Node API. It includes Wifi, Ethernet, additional flash, and an extra I/O controller. The lower system requirements of low.js allow it to run comfortably on the ESP32-WROVER module. The ESP32-WROVER costs under $3 for large orders and is a very cost-effective solution for IoT devices requiring a microcontroller and Wifi. low.js for ESP32 also adds the benefit of fast software development and maintenance: specialized software developers are not needed for the microcontroller software.

How to install?

The community edition of low.js can be run on POSIX-based systems including Linux, uClinux, and Mac OS X. It is available on GitHub, and currently ./configure is not present, so you might need some programming skills and knowledge to get low.js up and running on your system. The commands are as follows:

git clone https://github.com/neonious/lowjs
cd lowjs
git submodule update --init --recursive
make

low.js for ESP32 is the same as the community edition, but adapted for the ESP32 microcontroller. This version is not open source and comes pre-flashed on the neonious one.

For more information and documentation visit the low.js website.

Deno, an attempt to fix Node.js flaws, is rewritten in Rust
Node.js announces security updates for all their active release lines for August 2018
Deploying Node.js apps on Google App Engine is now easy

JDK 12 is all set for public release in March 2019

Prasad Ramesh
17 Sep 2018
3 min read
With JDK 11 reaching general availability next week, a proposed schedule has also been released for JDK 12. It indicates a final release in March 2019, along with two JDK Enhancement Proposals (JEPs) targeted at JDK 12. Mark Reinhold, Chief Architect of the Java Platform Group at Oracle, made the announcement in a mail posted to the OpenJDK mailing list. Per the mail, JDK 12 should be out to the public on March 19, 2019.

The proposed schedule for JDK 12 is as follows:

13th December 2018: Rampdown Phase One
17th January 2019: Rampdown Phase Two
31st January 2019: Release-Candidate Phase
19th March 2019: General Availability

JDK 11 had a total of 17 JEPs, of which three were from the community, the highest number in any JDK release; the other 14 were from Oracle, according to a tweet by @bellsoftware. For JDK 12, two JEPs are integrated and will be available as preview language features, and there are four candidate JEPs.

JDK 12 preview features

JEP 325: Switch Expressions (Preview)

This JEP allows switch to be used both as a statement and as an expression. Both forms can use either a "traditional" or "simplified" scoping and control flow behavior. The changes to the switch statement will simplify everyday coding and pave the way for the use of pattern matching in switch.

JEP 326: Raw String Literals (Preview)

This JEP adds raw string literals to Java. A raw string literal can span many source code lines and does not interpret escape sequences, such as \n, or Unicode escapes of the form \uXXXX. It does not introduce any new String operators, and there is no change in the interpretation of traditional string literals.

JDK 12 JEP candidates

JEP 343 (Packaging Tool): create a new tool, based on the JavaFX javapackager tool, for packaging self-contained Java applications.

JEP 342 (Limit Speculative Execution): help both developers and deployers defend against speculative-execution vulnerabilities by providing a means to limit speculative execution. It is not a complete defense against all forms of speculative execution.

JEP 340 (One AArch64 Port, Not Two): remove all arm64 port related sources while retaining the 32-bit ARM port and the 64-bit AArch64 port. This will help focus on a single 64-bit ARM implementation and eliminate the duplicate work of maintaining two ports.

JEP 341 (Default CDS Archives): enhance the JDK build process to generate a class data-sharing (CDS) archive, using the default class list, on 64-bit platforms. The goal is to improve out-of-the-box startup time and eliminate the need for users to run -Xshare:dump to benefit from CDS.

To know more details on the proposed schedule for JDK 12, visit the OpenJDK website.

JEP 325: Revamped switch statements that can also be expressions proposed for Java 12
Mark Reinhold on the evolution of Java platform and OpenJDK
No more free Java SE 8 updates for commercial use after January 2019

Rust 1.29 is out with improvements to its package manager, Cargo

Savia Lobo
14 Sep 2018
2 min read
Yesterday, the Rust team announced the next version of their systems programming language, Rust 1.29. Users who installed previous versions of Rust via rustup can get the latest version with a simple command:

$ rustup update stable

The Rust 1.29 release has fewer features than usual because the 1.29 cycle was spent preparing for the upcoming releases, Rust 1.30 and 1.31, which will have a lot more in them.

What's new in Rust 1.29?

This stable release has two important improvements to Cargo, Rust's package manager:

cargo fix can automatically fix your code that has warnings
cargo clippy is a bunch of lints to catch common mistakes and improve your Rust code

cargo fix

Rust 1.29 includes a new subcommand for Cargo known as cargo fix. This initial release of cargo fix fixes a small number of warnings from the compiler. The compiler has an API for this, and it only suggests fixes for lints that the team is confident recommend correct code. Over time, cargo fix can be expanded, based on user suggestions, to automatically fix more warnings.

cargo clippy

Clippy has a large number of additional warnings that users can run against their Rust code. Users can now check out a preview of cargo clippy through rustup with the following command:

$ rustup component add clippy-preview

Clippy has not yet reached 1.0, so its lints may change. The Rust team will release a clippy component once it has stabilized. Users can't use clippy with cargo fix yet; that is still a work in progress.

Additional updates in Rust 1.29

Rust 1.29 also includes some library stabilizations. The three APIs stabilized in this release are:

Arc<T>::downcast
Rc<T>::downcast
Iterator::flatten

Users can also now compare &str and OsString.

For detailed information on Rust 1.29, read its release notes.

Deno, an attempt to fix Node.js flaws, is rewritten in Rust
Working with Shared pointers in Rust: Challenges and Solutions [Tutorial]
Rust Language Server, RLS 1.0 releases with code intelligence, syntax highlighting and more

Python serious about diversity, dumps offensive ‘master’, ‘slave’ terms in its documentation

Natasha Mathur
13 Sep 2018
3 min read
Python is set to change the "master" and "slave" terminology in its documentation and code, following complaints that the terminology is offensive. A Python developer at Red Hat, Victor Stinner, started a discussion titled "avoid master/slave terminology" on the Python bug tracker last week. The bug report discusses changing "master" and "slave" in the Python documentation to terms such as "parent", "worker", or something similar, based on complaints received "privately".

"For diversity reasons, it would be nice to try to avoid 'master' and 'slave' terminology which can be associated to slavery," mentioned Victor Stinner in the bug report.

Not every Python developer who participated in the discussion agreed with Victor Stinner. One developer, Larry Hastings, wrote: "I'm a little surprised by this. It's not like slavery was acceptable when these computer science terms were coined and it's only comparatively recently that they've gone out of fashion. On the other hand, there are some areas in computer software where "master" and "slave" are the exact technical terms (e.g. IDE), and avoiding them would lead to confusion."

Another Python developer, Terry J. Reedy, wrote: "To me, there is nothing wrong with the word 'master', as such. I mastered Python to become a master of Python. Purging Python of 'master' seems ill-conceived. Like Larry, I object to action based on hidden evidence."

Python is not the only project to come under scrutiny. The Redis community, Django, and Drupal have all faced the same issue. Drupal changed the terms "master" and "slave" to "primary" and "replica". Similarly, Django swapped "master" and "slave" for "leader" and "follower".

To put an end to the debate about the use of this language, Guido van Rossum, who resigned as "Benevolent Dictator for Life" (BDFL) in July but is still active as a core developer, was pulled back in. Guido ended the discussion by saying, "I'm closing this now. Three out of four of Victor's PRs have been merged. The fourth one should not be merged because it reflects the underlying terminology of UNIX ptys. There's a remaining quibble about "pliant children" -> "helpers" but that can be dealt with as a follow-up PR without keeping this discussion open."

The final commit on this is as follows:

bpo-34605, pty: Avoid master/slave terms
* pty.spawn(): rename master_read parameter to parent_read
* Rename pty.slave_open() to pty.child_open(), but keep a pty.slave_open alias to pty.child_open for backward compatibility
* os.openpty(), os.forkpty(): rename master_fd/slave_fd to parent_fd/child_fd
* Rename internal variables:
  * Rename master_fd/slave_fd to parent_fd/child_fd
  * Rename slave_name to child_name

(A short sketch of the affected pty APIs follows at the end of this piece.)

For more information on the discussion, be sure to check out the official Python bug report.

Why Guido van Rossum quit as the Python chief (BDFL)
No new PEPS will be approved for Python in 2018, with BDFL election pending
Python comes third in TIOBE popularity index for the first time
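To ground the commit quoted above, here is a small, hedged sketch of the pty APIs it touches. It is written so that it runs on a POSIX system regardless of whether the renamed keyword arguments ship: os.openpty() is unpacked into local variable names that merely echo the new parent/child wording (the names here are illustrative), and pty.spawn() is called with its read callback passed positionally, so the parameter's keyword name does not matter.

import os
import pty

# os.openpty() returns a connected pair of file descriptors; the local names
# below simply echo the parent/child terminology from the commit.
parent_fd, child_fd = os.openpty()
os.write(child_fd, b"written via the child end\n")
print(os.read(parent_fd, 1024).decode())
os.close(parent_fd)
os.close(child_fd)

# pty.spawn() accepts an optional read callback as its second positional
# argument, so callers are unaffected by the keyword rename.
def relay(fd: int) -> bytes:
    # Read the child's output; pty.spawn() copies it to stdout.
    return os.read(fd, 1024)

pty.spawn(["/bin/echo", "hello from a pty"], relay)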

.NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5

Savia Lobo
13 Sep 2018
5 min read
Yesterday, the .NET community announced the second preview of .NET Core 2.2 and Entity Framework Core 2.2. They also released C# 7.3 and ML.NET 0.5. Let's have a look at the highlights of each of these announcements.

.NET Core 2.2 Preview 2

.NET Core 2.2 Preview 2 can be used with Visual Studio 15.8, Visual Studio for Mac, and Visual Studio Code. Following are two highlights of this release.

Tiered compilation enabled

Tiered compilation is now enabled by default. It was available as part of the .NET Core 2.1 release, but at that time it had to be enabled via application configuration or an environment variable. It is now on by default and can be disabled as needed.

In the chart the team published, the baseline is .NET Core 2.1 RTM running in a default configuration with tiered compilation disabled; the second scenario has tiered compilation enabled. There is a significant requests-per-second (RPS) throughput benefit with tiered compilation enabled. The numbers in the chart are scaled so that the baseline always measures 1.0, which makes it easy to read performance changes as a percentage. The first two tests are TechEmpower benchmarks and the last one is Music Store, a frequently used sample ASP.NET app.

Platform support

.NET Core 2.2 is supported on the following operating systems:

Windows Client: 7, 8.1, 10 (1607+)
Windows Server: 2008 R2 SP1+
macOS: 10.12+
RHEL: 6+
Fedora: 27+
Ubuntu: 14.04+
Debian: 8+
SLES: 12+
openSUSE: 42.3+
Alpine: 3.7+

Read about .NET Core 2.2 Preview 2 in detail on the Microsoft blog.

Entity Framework Core 2.2 Preview 2

This preview includes a large number of bug fixes and two important additional previews: a data provider for Cosmos DB, and new spatial extensions for the SQL Server and in-memory providers.

New EF Core provider for Cosmos DB

This new provider enables developers familiar with the EF programming model to easily target Azure Cosmos DB as an application database, with benefits such as global distribution, elastic scalability, 'always on' availability, very low latency, and automatic indexing.

Spatial extensions for SQL Server and in-memory

This implementation uses the NetTopologySuite library, which the PostgreSQL provider also uses, as the source of spatial .NET types. NetTopologySuite is a database-agnostic spatial library that implements standard spatial functionality using .NET idioms like properties and indexers. The extension adds the ability to map and convert instances of these types to the column types supported by the underlying database, and to translate methods defined on these types in LINQ queries to SQL functions supported by the underlying database.

Read more about Entity Framework Core 2.2 Preview 2 on the Microsoft blog.

C# 7.3

C# 7.3 is the newest point release in the 7.0 family. Along with new compiler options, there are two main themes to the C# 7.3 release: one provides features that enable safe code to be as performant as unsafe code, and the other provides incremental improvements to existing features.

New features that support better performance for safe code:

Access to fixed fields without pinning
Reassignment of ref local variables
Use of initializers on stackalloc arrays
Use of fixed statements with any type that supports a pattern
Additional generic constraints

The new compiler options in C# 7.3 are:

-publicsign to enable Open Source Software (OSS) signing of assemblies
-pathmap to provide a mapping for source directories

Read more about C# 7.3 in its documentation notes.

ML.NET 0.5

The .NET community released ML.NET version 0.5. ML.NET is a cross-platform, open source machine learning framework for .NET developers. This release includes two key highlights.

Addition of a TensorFlow model scoring transform (TensorFlowTransform)

Starting from this version, the community plans to add support for deep learning in ML.NET. To that end, they introduced the TensorFlowTransform, which enables taking an existing TensorFlow model, either trained by the user or downloaded from elsewhere, and getting scores from that model in ML.NET. This new TensorFlow scoring capability doesn't require a working knowledge of TensorFlow internals. The implementation of this transform is based on code from TensorFlowSharp. One can simply add a reference to the ML.NET NuGet packages in their .NET Core or .NET Framework apps; under the covers, ML.NET includes and references the native TensorFlow library, which allows writing code that loads an existing trained TensorFlow model file for scoring.

New ML.NET API proposal exploration

The new ML.NET API offers more flexible capabilities than the current LearningPipeline API, which will be deprecated when the new API is ready. The new API offers attractive features that aren't possible with the current LearningPipeline API, including:

Strongly-typed API: takes advantage of C# capabilities, so errors can be discovered at compilation time, along with improved Intellisense in editors.
Better flexibility: the new API provides a decomposable train and predict process, eliminating rigid and linear pipeline execution.
Improved usability: direct calls to the APIs from user code, with no more scaffolding or isolation layer creating an obscure separation between what the developer writes and the internal APIs. Entrypoints are no longer mandatory.
Ability to simply score with TensorFlow models: one can load a TensorFlow model and score with it without needing to add any additional learner or training process.
Better visibility of the transformed data: users have better visibility of the data while applying transformers.

As this will be a significant change in ML.NET, the community has started an open discussion where users can provide feedback and help shape the long-term API for ML.NET. Users can share their feedback on the ML.NET GitHub repo.

Read more about ML.NET 0.5 in detail on the Microsoft blog.

Task parallel library for easy multi-threading in .NET Core [Tutorial]
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]
Microsoft's .NET Core 2.1 now powers Bing.com