
Tech News - Programming

573 Articles

The Eclipse Foundation releases Jakarta EE 8, the first truly open-source, vendor-neutral Java EE

Bhagyashree R
11 Sep 2019
3 min read
Yesterday, the Eclipse Foundation announced the release of the Jakarta EE 8 full platform, web profile specifications, and related Technology Compatibility Kits (TCKs). This marks the completion of Java EE's transition to an open and vendor-neutral evolution process.

Explaining the vision behind this release, Mike Milinkovich, executive director of the Eclipse Foundation, said, "There are tens of thousands of companies with strategic investments in Java EE and over 10 million Java developers globally. The finalization of the Jakarta EE 8 specifications means that the transition of Java EE to our new open, vendor-neutral, and community-based process has been completed, and paves the way for an entirely new era in Java innovation for enterprise and cloud workloads."

Back in 1999, Sun Microsystems developed Java EE under the name Java 2 Enterprise Edition (J2EE), which was rebranded as Java Platform, Enterprise Edition (Java EE) in 2006. When Oracle acquired Sun Microsystems in 2010, Java EE's governance and oversight also moved to Oracle. The development of Java EE's technical specifications was managed under the Java Community Process (JCP), which was a tightly vendor-led effort. To make Java EE more open, Oracle made the Eclipse Foundation the new steward of enterprise Java.

Read also: Eclipse Foundation releases updates on its Jakarta EE Rights to Java trademarks

Updates in Jakarta EE 8

Jakarta EE 8 ships with the same set of technical specifications as Java EE 8, which means developers are not required to make any changes to their Java EE 8 applications or their use of existing APIs. In this release, the team focused on updating the process used to determine new specs for Jakarta EE, replacing the JCP. The new process, called the Jakarta EE Specification Process (JESP), will be used by the Jakarta EE Working Group for the further development of Jakarta EE. It is based on the Eclipse Foundation Specification Process (EFSP) with a few changes.

Rhuan Rocha, a Java EE developer, wrote in the announcement, "The goals of JESP is being a process as lightweight as possible, with a design closer to open source development and with code-first development in mind. With this, this process promotes a new culture that focuses on experimentation to evolve these specification based on experiences gained with experimentation."

A key change is that there is no longer a Spec Lead, a role that held special intellectual property rights under the JCP. In an interview with JAXenter, Milinkovich explained how the new process differs from the JCP: "The Jakarta EE Specification Process is a level playing field in which all parties are equal, and collaboration is a must. Some of the other significant differences include a code-first approach, rather than a focus on specifications as the starting point. You can also expect a more fully open, collaborative approach to generating specifications, with every decision made collectively by the community."

Along with the release of the Jakarta EE 8 specifications, the Eclipse Foundation also announced the certification of Eclipse GlassFish 5.1 as an open-source compatible implementation of the Jakarta EE 8 Platform. To know more, check out the official announcement by the Eclipse Foundation.

Other news in programming

Is Scala 3.0 a new language altogether? Martin Odersky, its designer, says "yes and no"
Core Python team confirms sunsetting Python 2 on January 1, 2020
Go 1.13 releases with error wrapping, TLS 1.3 enabled by default, improved number literals, and more


Microsoft Teams Rooms gets a new content camera feature for whiteboard presentations

Amrata Joshi
10 Sep 2019
2 min read
Last month, the team at Microsoft introduced a content camera feature for Microsoft Teams Rooms. With this feature, users can intelligently include a whiteboard presentation in their Teams meetings.

https://twitter.com/randychapman/status/1169884205141987332

The Microsoft Teams content camera uses artificial intelligence to detect, crop, and frame a traditional in-room whiteboard and share its content with the meeting participants. Interestingly, the feature makes a presenter standing in front of the whiteboard translucent, so that remote participants can see the content right through them.

https://youtu.be/1XvgH2rNpmk

IT administrators can connect certified content cameras to USB ports on Microsoft Teams Rooms systems. Once the content camera is connected to the room, the admin can select it for input via the Device Settings menu. Currently, Crestron and Logitech cameras are available and certified for use with the Teams content camera functionality, and Microsoft has announced that it will be adding more cameras soon. Microsoft partners are also offering unique mounting systems so that users can fit their cameras into any meeting space. The company announced that ceiling tile and digital signal processor (DSP) options are also certified for use in meeting rooms.

Users seem excited about this news. A user commented on Hacker News, "I don't see myself using this, but its really cool. The whole "see through presenter" thing is awesome. Somewhat unrelated, but it would be really cool to see that done using AR glasses."

https://twitter.com/AndrewMorpeth/status/1169907577905270784
https://twitter.com/ramsacDan/status/1170595795873292288

To know more about this news, check out the official post.

Other interesting news in programming

Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices
LLVM 9.0 RC3 is now out with official RISC-V support, updates to SystemZ and more
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift


Oracle introduces patch series to add eBPF support for GCC

Amrata Joshi
10 Sep 2019
4 min read
Yesterday, the team at Oracle introduced a patch series that brings a port of GCC to eBPF (extended Berkeley Packet Filter), a virtual machine that resides in the Linux kernel. With support from binutils (the binary tools), this port can be used for developing compiled eBPF applications.

Initially used for user-level packet capture and filtering, eBPF now serves as general-purpose infrastructure for non-networking purposes as well. Since May, Oracle has been planning to introduce an eBPF back end in GCC 10 to make the GNU compiler target the general-purpose in-kernel virtual machine. Oracle's work on eBPF support for GCC is part of the company's efforts towards improving DTrace on Linux.

As a compilation target, eBPF is different because of the restrictions imposed by the kernel verifier and the security-driven design of the architecture. Currently, the back end issues an error whenever an eBPF restriction is violated. This increases the chances of the resulting objects being accepted by the kernel verifier, shortening the development cycle.

How will the patch series support GCC?

- The first patch in the series updates config.guess and config.sub from the upstream 'config' project to recognize bpf-*-* triplets.
- The second fixes an integrity check in opt-functions.awk.
- The third annotates multiple tests in the gcc.c-torture/compile test suite.
- The fourth introduces a new target flag named indirect_call and annotates the tests in gcc.c-torture/compile.
- The fifth adds the new GCC port.
- The sixth adds a libgcc port for eBPF; currently it addresses the limitations imposed by the target by eliminating a few functions in libgcc2 whose default implementations surpass the eBPF stack limit.
- The seventh, eighth, and ninth patches deal with testing the new port. The gcc.target testsuite has been extended with eBPF-specific tests that cover the backend-specific built-in functions as well as diagnostics.
- The tenth adds documentation updates to the GCC manual, including information on the new command-line options and compiler built-ins.

Jose E. Marchesi, software engineer at Oracle, writes, "Finally, the last patch adds myself as the maintainer of the BPF port. I personally commit to evolve and maintain the port for as long as necessary, and to find a suitable replacement in case I have to step down for whatever reason."

Other improvements expected in the port

Currently, the port supports only a subset of C; in the future, the team might add more languages as the eBPF kernel verifier gets smarter. Dynamic stack allocation (alloca and VLAs) is achieved by using a normal general register, %r9, as a pseudo stack pointer, with the disadvantage that the register becomes "fixed" and therefore unavailable for general register allocation.

The team is planning more additions to the port that can be used to translate more C: CO-RE capabilities (compile once, run everywhere), generation of BTF, etc. The team is also working on simulator and GDB support so that it becomes possible to emulate the different kernel contexts where eBPF programs execute. Once simulator support is achieved, a suitable board description will be added to DejaGnu, the GNU test framework, so that the GCC test suites can run on it.

With two C compilers now generating eBPF, interoperability between the programs they generate will become a major concern for the team, a task that will require communication between the compiler and kernel communities.

Users on Hacker News seem excited about this news. A user commented, "This is very exciting! Nice work to the team that's doing this. I've been waiting to dive into eBPF until the tools mature a bit, so it's great to see eBPF support landing in GCC."

To know more about this news, check out the official mail thread.

Other interesting news in programming

Core Python team confirms sunsetting Python 2 on January 1, 2020
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Go 1.13 releases with error wrapping, TLS 1.3 enabled by default, improved number literals, and more


Core Python team confirms sunsetting Python 2 on January 1, 2020

Vincy Davis
10 Sep 2019
3 min read
Yesterday, the team behind Python posted details about the sunsetting of Python 2. As announced before, after January 1, 2020, Python 2 will not be maintained by the Python team. This means it will no longer receive new features, and it will not be fixed even if a security problem is found in it.

https://twitter.com/gvanrossum/status/1170949978036084736

Why is Python 2 retiring?

In the detailed post, the Python team explains that the big changes Python 2 needed led them to start work on Python 3 in 2006. To keep users happy, the team kept improving and publishing both versions together. However, because some changes could not be handled by Python 2, and because maintaining both versions left too little time to improve Python 3 faster, the team decided to sunset the older version. The team says, "So, in 2008, we announced that we would sunset Python 2 in 2015, and asked people to upgrade before then. Some did, but many did not. So, in 2014, we extended that sunset till 2020."

The Python team has clearly stated that from January 1, 2020 onwards, they will not upgrade or improve Python 2 even if a fatal security problem crops up in it. Their advice to Python 2 users is to switch to Python 3 using the official porting guide, as many tools will stop supporting Python 2 in the future. A Python 3 readiness graph tracks support across the 360 most popular Python packages, and users can also check out 'Can I Use Python 3?' to find out which of the tools they depend on still need to upgrade to Python 3.

Python 3 adoption has begun

As the end date was decided well in advance, many implementations of Python have already dropped support for Python 2 or are supporting both Python 2 and 3 for now. Two months ago, NumPy, the scientific computing library for Python, officially dropped support for Python 2.7 in its latest version, NumPy 1.17.0, which supports only Python versions 3.5-3.7. Earlier this year, pandas 0.24 stopped supporting Python 2. Pandas maintainer Jeff Reback said, "It's 2019 and Python 2 is slowly trickling out of the PyData stack."

However, not all projects are fully on board yet, and there have also been efforts to keep Python 2 alive. In August this year, PyPy announced that they do not plan to deprecate Python 2.7 support as long as PyPy exists.

https://twitter.com/pypyproject/status/1160209907079176192

Many users are happy to say goodbye to Python 2 in favor of building towards a long-term vision.

https://twitter.com/mkennedy/status/1171132063220502528
https://twitter.com/MeskinDaniel/status/1171244860386480129

A user on Hacker News comments, "In 2015, there was no way I could have moved to Python 3. There were too many libraries I depended on that hadn't ported yet. In 2019, I feel pretty confident about using Python 3, having used it exclusively for about 18 months now. For my personal use case at least, this timeline worked out well for me. Hopefully it works out for most everyone. I can't imagine they made this decision without at least some data backing it up."

Head over to the Python website for more details about this news.

Latest news in Python

Introducing OpenDrop, an open-source implementation of Apple AirDrop written in Python
Poetry, a Python dependency management and packaging tool, releases v1 beta 1 with URL dependency
Łukasz Langa at PyLondinium19: "If Python stays synonymous with CPython for too long, we'll be in big trouble"
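The porting advice above can be sketched as the small guard many projects added around the sunset. This is an illustrative snippet, not code from the Python team; the (3, 5) floor is this example's choice, mirroring the NumPy 1.17 support range mentioned above rather than any official requirement.

```python
import sys

# Refuse to run under a retired interpreter; the exact floor is this
# example's choice, mirroring NumPy 1.17's 3.5-3.7 range.
if sys.version_info < (3, 5):
    raise SystemExit("Python 2 reached end of life on January 1, 2020; "
                     "please upgrade to Python 3.")

# Two behaviors that silently differ between Python 2 and 3:
ratio = 7 / 2        # true division in Python 3 gives 3.5 (Python 2 gave 3)
text = "résumé"      # str is Unicode in Python 3 (it was bytes in Python 2)
print(ratio, len(text))
```

Under Python 2 the same file would print a truncated `3` and count bytes rather than characters, which is exactly the kind of silent difference the porting guide walks through.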


Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices

Sugandha Lahoti
04 Sep 2019
4 min read
Microsoft yesterday unveiled Static TypeScript as an alternative to embedded interpreters. Static TypeScript (STS) is an implementation of a static compiler for TypeScript that runs in the web browser. It is primarily designed to aid school children in their computer science programming projects. STS is supported by a compiler that is itself written in TypeScript, and it generates machine code that runs efficiently on microcontrollers in the target RAM range of 16-256 kB.

Microsoft's plan behind building Static TypeScript

Microcontrollers are typically programmed in C, C++, or assembly, none of which are particularly beginner-friendly. MCUs that can run modern languages such as JavaScript and Python usually rely on interpreters like IoT.js, Duktape, or MicroPython. The problem with interpreters is high memory usage, leaving little room on the devices for the programs developers have written. Microsoft therefore came up with STS as a more efficient alternative to the embedded-interpreter approach. It is statically typed, which makes for a less surprising programming experience.

Features of Static TypeScript

- STS eliminates most of the "bad parts" of JavaScript; following StrongScript, it uses nominal typing for statically declared classes and supports efficient compilation of classes using classic vtable techniques.
- The STS toolchain runs offline, once loaded into a web browser, without the need for a C/C++ compiler.
- The STS compiler generates efficient and compact machine code, which unlocks a range of application domains such as game programming for low-resource devices.
- Deployment of STS user programs to embedded devices does not require app or device driver installation, just access to a web browser.
- The relatively simple compilation scheme for STS leads to surprisingly good performance on a collection of small JavaScript benchmarks, often comparable to advanced, state-of-the-art JIT compilers like V8, with orders of magnitude smaller memory requirements.

Differences with TypeScript

In contrast to TypeScript, where all object types are bags of properties, STS has four kinds of unrelated object types at runtime:

- a dynamic map type with named (string-indexed) properties that can hold values of any type
- a function (closure) type
- a class type, describing instances of a class, which are treated nominally via an efficient runtime subtype check on each field/method access
- an array (collection) type

STS compiler and runtime

The STS compiler and toolchain (linker, etc.) are written solely in TypeScript. The source TypeScript program is processed by the regular TypeScript compiler to perform syntactic and semantic analysis, including type checking. The STS device runtime is mainly written in C++ and includes a bespoke garbage collector. The regular TypeScript compiler, the STS code generators, assembler, and linker are all implemented in TypeScript and run both in the web browser and on the command line. The STS toolchain compiles STS to Thumb machine code and links it against a pre-compiled C++ runtime in the browser, which is often the only available execution environment in schools.

Static TypeScript is used in all MakeCode editors

STS is the core language supported by Microsoft's MakeCode framework. MakeCode provides hands-on computing education for students through projects and enables the creation of custom programming experiences for MCU-based devices. Each MakeCode editor targets programming of a specific device or device class via STS. STS supports the concept of a package: a collection of STS, C++, and assembly files that can also list other packages as dependencies. This capability has been used by third parties to extend the MakeCode editors, mainly to accommodate hardware peripherals for various boards.

STS is also used in MakeCode Arcade. With Arcade, STS lets developers of all skill levels easily write cool retro-style pixelated games, designed to run either inside a virtual game console in the browser or on inexpensive microcontroller-based handhelds.

For more in-depth information, please read the research paper. People were quite interested in this development. A comment on Hacker News reads, "This looks very interesting. If all it takes is dropping "with, eval, and prototype inheritance" to get fast and efficient JS execution, I'm all for it."

Other news in tech

TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support and more
Microsoft announces XLOOKUP for Excel users that fixes most VLOOKUP issues
Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs


LLVM 9.0 RC3 is now out with official RISC-V support, updates to SystemZ and more

Bhagyashree R
04 Sep 2019
3 min read
Last week, the LLVM team announced the release of LLVM 9.0 RC3, which fixes all the known release blockers. LLVM 9.0 missed its planned release date of 28th August; however, with the third RC out, we can expect it to ship in the coming weeks along with subprojects like Clang 9.0. LLVM 9.0 will include features like official RISC-V support and gfx10 support for the AMDGPU compiler backend, among others.

Announcing the release, the team shared on the LLVM mailing list, "There are currently no open release blockers, which means if nothing new comes up, the final release could ship soon and this is what it would look like (except for more release notes, which are still very welcome)."

What's new coming in LLVM 9.0

Official support for the RISC-V target

In July this year, Alex Bradbury, CTO and co-founder of the lowRISC project, proposed making the "experimental" RISC-V LLVM backend "official" for LLVM 9.0. This essentially means that starting with this release, the RISC-V backend will be built by default for LLVM, and developers will be able to use it for standard LLVM/Clang builds out of the box. Explaining the reason behind this update, Bradbury wrote in the proposal, "As well as being more convenient for end users, this also makes it significantly easier for e.g. Rust/Julia/Swift and other languages using LLVM for code generation to do so using the system-provided LLVM libraries. This will make life easier for those working on RISC-V ports of Linux distros encountering issues with Rust dependencies."

Updates to the SystemZ target

Starting from LLVM 9.0, the SystemZ target supports the 'arch13' architecture. It includes builtins for the new vector instructions, which can be enabled using the '-mzvector' option. The compiler will also support and automatically generate alignment hints on vector load and store instructions.

Updates to the AMDGPU target

In LLVM 9.0, function call support is enabled by default. Other updates include improved support for 96-bit loads and stores, gfx10 support, and the DPP combiner pass enabled by default.

Updates to LLDB

LLVM 9.0 will be the last release to include 'lldb-mi' as part of LLDB; it will, however, still be available in a downstream GitHub repository. Other changes include color-highlighted backtraces and support for DWARF4 (debug_types) and DWARF5 (debug_info) type units.

To read the entire list of updates in LLVM 9.0, check out the official release notes.

LLVM's Arm stack protection feature turns ineffective when the stack is re-allocated
LLVM WebAssembly backend will soon become Emscripten's default backend, V8 announces
LLVM 8.0.0 releases!

Introducing CUE, an open-source data constraint language that merges types and values into a single concept

Bhagyashree R
03 Sep 2019
4 min read
Inspired by Google's General Configuration Language (GCL), a team of developers has come up with a new language called CUE. It is an open-source data validation language that aims to simplify tasks involving defining and using data. Its applications include data validation, data templating, configuration, querying, code generation, and even scripting.

Two core aspects of CUE set it apart from other programming or configuration languages: first, it treats types as values; second, these values are ordered into a lattice, a partially ordered set. Explaining the concept behind CUE, the developers write, "CUE merges the notion of schema and data. The same CUE definition can simultaneously be used for validating data and act as a template to reduce boilerplate. Schema definition is enriched with fine-grained value definitions and default values. At the same time, data can be simplified by removing values implied by such detailed definitions. The merging of these two concepts enables many tasks to be handled in a principled way."

These two properties account for the various advantages CUE provides.

Advantages of using CUE

Improved typing capabilities: Most configuration languages today focus mainly on reducing boilerplate and provide minimal typing support. CUE offers "expressive yet intuitive and compact" typing capabilities by unifying types and values.

Enhanced readability: CUE allows a single definition in one file to apply to values in many other files, so developers need not open various files to verify validity.

Data validation: The 'cue' command-line tool gives you a straightforward way to define and verify schema. You can also use CUE constraints to verify document-oriented databases such as Mongo.

Read also: MongoDB announces new cloud features, beta version of MongoDB Atlas Data Lake and MongoDB Atlas Full-Text Search and more!

Easily validate backward compatibility: With CUE, you can easily verify whether a newer version of a schema is backward compatible with an older one. CUE considers an API backward compatible if it subsumes the older one, or if the old one is an instance of the new one.

Combining constraints from different sources: CUE is commutative, which means you can combine constraints from various sources, such as a base template, code, and client policies, in any order.

Normalization of data definitions: Combining constraints from many sources can result in a lot of redundancy. CUE's logical inference engine addresses this by automatically reducing constraints. Its API allows computing and selecting between different normal forms to optimize for a certain representation.

Code generation and extraction: Currently, CUE can extract definitions from Go code and Protobuf definitions. It facilitates the use of existing sources, or a smoother transition to CUE, by allowing the annotation of existing sources with CUE expressions.

Querying data: CUE constraints can be used to find patterns in data. You can perform more elaborate querying using a 'find' or 'query' subcommand, and you can also query data programmatically through the CUE API.

In a Hacker News discussion about CUE, many developers compared it with Jsonnet, a data templating language. A user wrote, "It looks like an alternative to Jsonnet which has schema validation & strict types. IMO, Jsonnet syntax is much simpler, it already has integration with IDEs such as VSCode and Intellij and it has enough traction already. Cue seems like an e2e solution so it's not only an alternative to Jsonnet, it also removes the need of JSON Schema, OpenAPI, etc. so given that it's a 5 months old project, still has too much time to evolve and mature."

Another user added, "CUE improves in Jsonnet in primarily two areas, I think: Making composition better (it's order-independent and therefore consistent), and adding schemas. Both Jsonnet and CUE have their origin in GCL internally at Google. Jsonnet is basically GCL, as I understand it. But CUE is a whole new thing."

Others also praised its features. "When you consider the use of this language within a distributed system it's pretty freaking brilliant," a user commented. Another added, "I feel like that validation feature could theoretically save a lot of people that occasional 1 hour of their time that was wasted because of a typo in a config file leading to a cryptic error message."

Read more about CUE and its concepts in detail on its official website.

Other news in programming languages

'Npm install funding', an experiment to sustain open-source projects with ads on the CLI terminal faces community backlash
"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
Kotlin 1.3.50 released with 'duration and time Measurement' API preview, Dukat for npm dependencies, and much more!
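The "types as values in a lattice" idea above can be made concrete with a toy Python sketch. To be clear, this does not use CUE's syntax or API: the `INT` marker and `unify` function are inventions of this example. The point it illustrates is that merging a schema constraint with concrete data is a single commutative operation, so constraints can arrive from any source in any order.

```python
# Toy model of CUE-style unification: a "type" and a concrete value live in
# the same ordered set, and merging constraints is order-independent.
# INT and unify() are this sketch's inventions, not CUE's actual API.
INT = ("type", "int")  # stands in for CUE's `int` type-as-value

def unify(a, b):
    """Return the most specific value satisfying both constraints."""
    if a == b:
        return a
    for typ, val in ((a, b), (b, a)):
        if typ == INT and isinstance(val, int) and not isinstance(val, bool):
            return val  # a concrete int is an instance of the int type
    raise ValueError(f"conflicting constraints: {a!r} and {b!r}")

# Schema (INT) and data (42) merge the same way from either direction:
assert unify(INT, 42) == unify(42, INT) == 42

# Conflicting data fails validation, the way CUE reports a schema error:
try:
    unify(INT, "forty-two")
except ValueError as err:
    print(err)
```

In real CUE the lattice is much richer (defaults, disjunctions, nested structs), but the commutativity shown here is exactly why CUE can combine a base template, code-derived schema, and client policies in any order.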


Microsoft announces XLOOKUP for Excel users that fixes most VLOOKUP issues

Amrata Joshi
02 Sep 2019
3 min read
Last week, the team at Microsoft announced the XLOOKUP function for Excel users, a successor to VLOOKUP, the first lookup function most Excel users learn. XLOOKUP gives Excel users an easier way of displaying information in their spreadsheets. Currently, the function is only available to Office 365 testers, and the company will make it more broadly available later.

XLOOKUP can look vertically as well as horizontally, so it replaces HLOOKUP too. XLOOKUP needs just 3 arguments to perform the most common exact lookup, whereas VLOOKUP required 4. The official post reads, "Let's consider its signature in the simplest form:

XLOOKUP(lookup_value, lookup_array, return_array)

lookup_value: What you are looking for
lookup_array: Where to find it
return_array: What to return"

XLOOKUP overcomes the limitations of VLOOKUP

Exact match is now the default: VLOOKUP defaulted to an approximate match of what the user was looking for rather than an exact match. With XLOOKUP, users get the exact match.

Data can be drawn from both sides: VLOOKUP can only draw on data to the right-hand side of the reference column, so users have to rearrange their data to use the function. With XLOOKUP, users can draw on data both to the left and to the right, and it combines VLOOKUP and HLOOKUP into a single function.

Column insertions/deletions: VLOOKUP's 3rd argument is a column number, so if you insert or delete a column you have to increment or decrement the column number inside the VLOOKUP. With XLOOKUP, users can insert or delete columns freely.

Search from the back is now possible: With VLOOKUP, users had to reverse the order of their data to find the last occurrence of a value; with XLOOKUP it is easy to search the data from the back.

References cells systematically: With VLOOKUP, the 2nd argument, table_array, needs to stretch from the lookup column to the results column. It references more cells than necessary, which results in unnecessary calculations and reduces the performance of your spreadsheets. XLOOKUP references only the cells it needs, avoiding such complications.

In an email to CNBC, Joe McDaid, Excel's senior program manager, wrote that XLOOKUP is "more powerful than INDEX/MATCH and more approachable than VLOOKUP." To know more about this news, check out the official post.

What's new in application development this week?

Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs
Qt introduces Qt for MCUs, a graphics toolkit for creating a fluid user interface on microcontrollers
Twilio launched Verified By Twilio, that will show customers who is calling them and why
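To make the three-argument signature concrete, here is a small Python model of XLOOKUP's exact-match behavior. It is an illustration of the semantics described above, not Excel code; the `from_back` flag is this sketch's stand-in for XLOOKUP's reverse-search mode, not Excel's actual parameter name.

```python
def xlookup(lookup_value, lookup_array, return_array, from_back=False):
    """Model of XLOOKUP(lookup_value, lookup_array, return_array).

    Exact match only; `from_back` imitates the search-from-the-back mode
    (it is not Excel's real parameter name).
    """
    indices = range(len(lookup_array))
    if from_back:
        indices = reversed(indices)
    for i in indices:
        if lookup_array[i] == lookup_value:
            return return_array[i]
    raise LookupError(f"{lookup_value!r} not found")  # Excel would show #N/A

# The lookup and return columns are passed independently, so the result
# column may sit to the left OR the right of the lookup column, and no
# in-between columns are referenced:
names    = ["Ann", "Bob", "Cid", "Bob"]
salaries = [50000, 60000, 55000, 65000]
print(xlookup("Cid", names, salaries))                  # 55000
print(xlookup("Bob", names, salaries, from_back=True))  # 65000
```

Because the two columns are independent arguments, inserting a column between them in a spreadsheet would not break the formula, which is the column insertion/deletion advantage described above.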


Facebook is reportedly working on Threads app, an extension of Instagram's 'Close friends' feature to take on Snapchat

Amrata Joshi
02 Sep 2019
3 min read
Facebook is seemingly working on a new messaging app called Threads that would help users share their photos, videos, location, speed, and battery life with only their close friends, The Verge reported earlier this week. This means users can selectively share content with their friends without revealing to others the list of close friends with whom the content is shared. The app currently does not display real-time location, but it might notify users by stating that a friend is “on the move,” as per the report by The Verge.

How does Threads work?
As per the report by The Verge, the Threads app appears to be similar to the existing messaging product inside the Instagram app. It seems to be an extension of the ‘Close friends’ feature for Instagram stories, where users can create a list of close friends and make their stories visible just to them.

With Threads, users who have opted in to ‘automatic sharing’ of updates will be able to regularly show their status updates and real-time information in the main feed to their close friends. The auto-sharing of statuses will be done using the phone’s sensors. Messages coming from your friends will appear in a central feed, with a green dot indicating which of your friends are currently active/online. If a friend has posted a story recently on Instagram, you will be able to see it from the Threads app as well. It also features a camera, which can be used to capture photos and videos and send them to close friends. While Threads is currently being tested internally at Facebook, there is no clarity about its launch date.

Direct’s revamped version or Snapchat’s potential competitor?
With Threads, if Instagram manages to create a niche around ‘close friends’, it might shift a significant proportion of Snapchat’s users to its platform. In 2017, the team had experimented with Direct, a standalone camera messaging app, which had many filters similar to Snapchat’s. But in May this year, the company announced that it would no longer support Direct. Threads looks like Facebook’s second attempt to compete with Snapchat.
https://twitter.com/MattNavarra/status/1128875881462677504

Threads’ focus on strengthening ‘close friends’ relationships might promote more sharing of personal data, including even location and battery life. This begs the question: is our content really safe? Just three months ago, Instagram was in the news for exposing the personal data of millions of influencers online. The exposed data included contact information of Instagram influencers, brands, and celebrities.
https://twitter.com/hak1mlukha/status/1130532898359185409

According to Instagram’s current Terms of Use, it does not take ownership of the information shared on it. But here’s the catch: it also states that it has the right to host, use, distribute, run, modify, copy, publicly perform or translate, display, and create derivative works of user content, as per the user’s privacy settings. In essence, the platform has a right to use the content we post.

Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset
Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests
Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules
NetNewsWire 5.0 releases with Dark mode, smart feed article list, three-pane design and much more!

Amrata Joshi
02 Sep 2019
3 min read
Last week, the team behind NetNewsWire released NetNewsWire 5.0, a free and open-source RSS reader for Mac. NetNewsWire lets users read articles from their favorite blogs and news sites and keeps track of what they have already read. Users need not switch from page to page to read new articles; instead, NetNewsWire presents them with a list of new articles.

NetNewsWire started in 2002 as Brent Simmons’ project, and was sold in 2005 and again in 2011. Simmons finally re-acquired NetNewsWire from Black Pixel last year, and relaunched it as version 5 this year. When the project restarted, it was named “Evergreen,” but it became NetNewsWire again in 2018. NetNewsWire 5.0 includes JSON Feed support, syncing via Feedbin, Dark Mode, a “Today” smart feed, starred articles, and more.

Key features included in NetNewsWire 5.0

Three-pane design
As per the image below, NetNewsWire 5.0 features a common three-pane design where the user’s feeds and folders are in the left-hand column, the article list for each feed is in the middle column, and the selected article is shown in the right column.
Image Source: The Sweet Setup

Dark mode
NetNewsWire 5 comes with light and dark modes, fitting well with macOS’s dark mode support.

New buttons
The buttons follow a design similar to the Mac design. This version features buttons for creating a new folder, sending an article to Safari, or marking an article as unread.

Smart feed article list
The smart feed article list shows the article title, the feed’s icon, a short description from the article, the time the article was published, and the publisher’s name. The “Today” smart feed shows articles published in the last 24 hours, rather than articles published since midnight on the current date.

Unread articles
Unread articles in a feed are marked with a bright blue dot, and users can double-click an article in the article list to open it directly in Safari.

Keyboard shortcuts
Users can mark all articles in a given feed as read by pressing CMD + K, jump between their smart feeds with CMD + 1/2/3, jump to the browser by hitting CMD + right arrow, and page through an article by hitting the spacebar.

What is expected in the future?

Support for more services
NetNewsWire supports only its own local RSS service and Feedbin, and currently the local RSS service doesn’t sync to any other service. Support for more services is expected in the future.

Read-It-Later support
Apps like Reeder and Fiery Feeds (on iOS) have lately been working on their own read-it-later features, and NetNewsWire 5 doesn’t yet support such a feature.

iOS version
The team is currently working on the iOS version of NetNewsWire.

It seems users are overall excited about this release. A user commented on Hacker News, “This looks very good, I'm just waiting for Feedly compatibility.” To know more about this news, check out the official post.

What’s new in application development this week?
Twilio launched Verified By Twilio, which will show customers who is calling them and why
Emacs 26.3 comes with GPG key for GNU ELPA package signature check and more!
Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks
Golang 1.13 module mirror, index, and Checksum database are now production-ready

Savia Lobo
02 Sep 2019
4 min read
Last week, the Golang team announced that the Go module mirror, index, and checksum database are now production-ready, adding reliability and security to the Go ecosystem. For Go 1.13 module users, the go command will use the module mirror and checksum database by default.

New production-ready services for Go 1.13 modules

Module mirror
A module mirror is a special kind of module proxy that caches metadata and source code in its own storage system. This allows the mirror to continue to serve source code that is no longer available from its original location, speeding up downloads and protecting users from disappearing dependencies. The module mirror is served at proxy.golang.org, which the go command uses by default for module users as of Go 1.13. Users still running an earlier version of the go command can use this service by setting GOPROXY=https://proxy.golang.org in their local environment.

Read Also: The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14

Module index
The module index is served by index.golang.org. It is a public feed of the new module versions that become available through proxy.golang.org. The module index is useful for tool developers who want to keep their own cache of what’s available in proxy.golang.org, or to keep up to date on some of the newest modules Go developers use.

Read Also: Implementing Garbage collection algorithms in Golang [Tutorial]

Checksum database
Modules introduced the go.sum file, a list of SHA-256 hashes of the source code and go.mod files of each dependency as it was first downloaded. The go command can use these hashes to detect misbehavior by an origin server or proxy that serves different code for the same version. However, the go.sum file has a limitation: it works entirely on trust on first use. When a user adds a version of a never-before-seen dependency, the go command fetches the code and adds lines to the go.sum file. The problem is that those go.sum lines aren’t checked against anyone else’s, so they might differ from the go.sum lines that the go command just generated for someone else.

The checksum database ensures that the go command always adds the same lines to everyone's go.sum file. Whenever the go command receives new source code, it can verify the hash of that code against this global database to make sure the hashes match, ensuring that everyone is using the same code for a given version.

The checksum database is served by sum.golang.org and is built on a transparent log (or “Merkle tree”) of hashes backed by Trillian, a transparent, highly scalable, and cryptographically verifiable data store. The main advantage of a Merkle tree is that it is tamper-evident: its properties don’t allow misbehavior to go undetected, making it more trustworthy. The go command checks inclusion proofs (that a specific record exists in the log) and consistency proofs (that the tree hasn’t been tampered with) before adding new go.sum lines to a module’s go.sum file.

This checksum database allows the go command to safely use an otherwise untrusted proxy. Because there is an auditable security layer sitting on top of it, a proxy or origin server can’t intentionally, arbitrarily, or accidentally start giving you the wrong code without getting caught. “Even the author of a module can’t move their tags around or otherwise change the bits associated with a specific version from one day to the next without the change being detected,” the blog mentions.

Developers are excited about the launch of the module mirror and checksum database and look forward to checking it out.
https://twitter.com/hasdid/status/1167795923944124416
https://twitter.com/jedisct1/status/1167183027283353601

To know more about this news in detail, read the official blog post.

Other news in Programming
Why Perl 6 is considering a name change?
The Julia team shares its finalized release process with the community
TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers and more
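The core idea behind go.sum-style verification can be illustrated with a small Python sketch: hash the downloaded module bytes and compare against a hash everyone has agreed on. This is a simplified model of pinned-hash checking, not the actual go.sum "h1:" tree-hash format, and the module name and bytes here are made up.

```python
import hashlib

def module_hash(data: bytes) -> str:
    """Hash the module contents (go.sum-style checks use SHA-256 too,
    though over a structured file tree rather than raw bytes)."""
    return hashlib.sha256(data).hexdigest()

# What a shared, global database would record for each module version.
pinned = {
    "example.com/mod@v1.0.0": module_hash(b"source code v1.0.0"),
}

def verify(module: str, downloaded: bytes) -> bool:
    """Reject the download if its hash differs from the pinned one,
    catching a proxy that serves different code for the same version."""
    return pinned.get(module) == module_hash(downloaded)

print(verify("example.com/mod@v1.0.0", b"source code v1.0.0"))  # True
print(verify("example.com/mod@v1.0.0", b"tampered code"))       # False
```

What the real checksum database adds on top of this sketch is the transparent log: instead of trusting a flat dictionary, the go command verifies Merkle inclusion and consistency proofs so the database itself cannot silently rewrite an entry.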
Why Perl 6 is considering a name change?

Bhagyashree R
30 Aug 2019
4 min read
There have been several discussions around renaming Perl 6. Earlier this month, another such discussion started when Elizabeth Mattijsen, one of the Perl 6 core developers, submitted an issue titled “The ‘Perl’ in the name ‘Perl 6’ is confusing and irritating.” She suggested changing the name to Camelia, which is also the name of Perl 6’s mascot.

In the year 2000, the Perl team decided to break everything and came up with a whole new set of design principles. Their goal was to remove the “historical warts” from the language, including the confusion surrounding sigil usage for containers, the ambiguity between the select functions, and more. Based on these principles, Perl was redesigned into Perl 6. Wall and his team envisioned making Perl 6 both a better object-oriented and a better functional programming language.

There are many differences between Perl 5 and Perl 6. For instance, in Perl 5 you need to choose things like a concurrency system and processing utilities, but in Perl 6 these features are part of the language itself. In an interview with the I Programmer website, when asked how the two languages differ, Moritz Lenz, a Perl and Python developer, said, “They are distinct languages from the same family of languages. On the surface, they look quite similar and they are designed using the same principles.”

Why developers want to rename Perl 6
Because of the aforementioned differences, many developers find the “Perl 6” name confusing. The name does not convey the fact that it is a brand new language. Developers may instead think that it is the next version of the Perl language, or believe that it is faster, more stable, or better than the earlier Perl. Also, search engines will sometimes show results for Perl 5 instead of Perl 6.

“Having two programming languages that are sufficiently different to not be source compatible, but only differ in what many perceive to be a version number, is hurting the image of both Perl 5 and Perl 6 in the world. Since the word "Perl" is still perceived as "Perl 5" in the world, it only seems fair that "Perl 6" changes its name,” Mattijsen wrote in the submitted issue.

To avoid this confusion, Mattijsen suggests an alternative name: Camelia. Many developers agreed with her suggestion. A developer commented on the issue, “The choice of Camelia is simple: search for camelia and language already takes us to Perl 6 pages. We can also keep the logo. And it's 7 characters long, 6-ish. So while ofun and all the others have their merits, I prefer Camelia.”

In addition to Camelia, Raku is also a strong contender for the new name, suggested by Larry Wall, the creator of Perl. A developer supporting Raku said, “In particular, I think we need to discuss whether "Raku", the alternative name Larry proposed, is a viable possibility. It is substantially shorter than "Camelia" (and hits the 4-character sweet spot), it's slightly more searchable, has pleasant associations of "comfort" or "ease" in its original Japanese, in which language it even looks a little like our butterfly mascot.”

Some developers were not convinced by the idea of renaming the language and think it rather adds to the confusion. A developer added, “I don't see how Perl 5 is going to benefit from this. We're freeing the name, yes. They're free to reuse the versions now in however way they like, yes. Are they going to name the successor to 5.30 “Perl 6”? Of course not – that would cause more confusion, make them look stupid and make whatever spiritual successor of Perl 6 we could think of look obsolete. Would they go up to Perl 7 with the next major change? Perhaps, but they can do that anyway: they're another grown-up language that can make its own decisions :) I'm not convinced it would do anything to improve Perl 6's image either. Being Perl 6 is “standing on the shoulders of giants”. Perl is a strong brand. Many people have left it because of the version confusion, yes. But I don't imagine these people coming back to check out some new Camelia language that came out. They might, however, decide to give Perl 6 a shot if they start seeing some news about it – “oh, I was using Perl 15 years ago... is this still a thing? Is that new famous version finally being out and useful? I should check it out!”

You can read the submitted issue and discussion on GitHub for more details.

What’s new in programming this week
Introducing Nushell: A Rust-based shell
React.js: why you should learn the front end JavaScript library and how to get started
Julia v1.2 releases with support for argument splatting, Unicode 12, new star unary operator, and more
Twilio launched Verified By Twilio, which will show customers who is calling them and why

Amrata Joshi
30 Aug 2019
3 min read
This month at the Twilio SIGNAL 2019 conference, Twilio announced Verified By Twilio, which will help customers know who is calling them. Verified By Twilio will also help them tell which calls are real and which are fake or spam. For this, the company is partnering with major call identification apps like CallApp, Hiya, Robokiller, and YouMail to reach more than 200 million consumers. Verified By Twilio is expected to be fully available by early 2020.

Verified By Twilio aims to show genuine callers
Due to privacy concerns, customers tend to reject a number of business calls daily, be they legitimate or illegitimate. As per Hiya’s State of the Phone Call report, Americans answer just a little more than 50% of the calls they receive on their cell phones. As per a recent Consumer Reports survey, around 70% of consumers do not answer a call if the number flashes up as anonymous. But if the customer knows in advance who is calling and why, there is a good chance such business calls will not go unanswered.

Verified By Twilio aims to let users know why they are getting a call even before they press the answer button, and to verify the business or organization behind each call. The official press release reads, “For example, if an airline company is trying to contact a customer about a cancelled flight, as the call comes in, the consumer will see the name of the airline with a short note indicating why they are calling. With that information, that person can make the decision about stepping out of a meeting or putting another call on hold to answer this critically important call.”

Jeff Lawson, co-founder and chief executive officer, Twilio, said in a statement, “At Twilio, we want to help consumers take back their phones, so that when their phone rings, they know it's a trusted, wanted call.” Lawson further added, “A lot of work is being done in the industry to stop unwanted calls and phone scams, and we want to ensure consumers continue to receive the wanted calls. Verified By Twilio is aimed at providing consumers with the context to know who's calling so they answer the important and wanted calls happening in their lives, such as from doctors, schools, and banks.”

How does Twilio plan to verify businesses?
Twilio is creating a repository for hosting verified information about businesses and organizations, as well as their associated brands, that will populate callers’ screens as soon as a call comes in. With the programmability of the Twilio platform, businesses and organizations will be able to dynamically assign a purpose to each call to give better context. Twilio says there will be no cost for businesses and organizations that want to join the private beta.

With Verified By Twilio, businesses and organizations might improve their overall engagement with their customers, as the chances of their calls getting answered would be higher, and in this way they would re-establish trust in traditional communications. To know more about this news, check out the official post.

What’s new in application development this week?
Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs
Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
Emacs 26.3 comes with GPG key for GNU ELPA package signature check and more!

Amrata Joshi
30 Aug 2019
2 min read
Last week, the team behind Emacs, the customizable libre text editor, announced the first release candidate of Emacs 26.3. On Wednesday, the team followed up with the final maintenance release, Emacs 26.3.

Key features in Emacs 26.3

New GPG key for GNU ELPA
Emacs 26.3 features a new GPG (GNU Privacy Guard) key for GNU ELPA package signature checking (GNU ELPA is the default package repository for GNU Emacs).

New option ‘help-enable-completion-auto-load’
This release also features a new option, ‘help-enable-completion-auto-load’, that allows users to disable the feature introduced in Emacs 26.1 which loads files during the completion of ‘C-h f’ and ‘C-h v’.

Support for the new Japanese Era name
This release now supports the new Japanese Era name.

Some users expected more changes in this release; a user commented on Hacker News, “So ... only two relevant changes this time?” Others think there are editors that suit them better than Emacs. Another user commented, “I don't want to start a flamewar, but I moved most things I was doing in Emacs to Textadept a while back because I found Textadept more convenient. That's not to say TA does everything you can do in Emacs, but it replaced all of the scripting I was doing with Emacs. You have the full power of Lua inside TA. Emacs always has a lag when I start it up, whereas TA is instant. I slowly built up functionality inside TA to the point that I realized I could replace everything I was doing in Emacs.”

To know more about this news, check out the mailing thread.

What’s new in application development this week?
Google Chrome 76 now supports native lazy-loading
Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks
#Reactgate forces React leaders to confront community’s toxic culture head on
Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs

Bhagyashree R
29 Aug 2019
3 min read
Yesterday, Microsoft announced that it supports the addition of its Extended File Allocation Table (exFAT) file system to the Linux kernel and publicly released its technical specification.
https://twitter.com/OpenAtMicrosoft/status/1166742237629308928

Launched in 2006, the exFAT file system is the successor to Microsoft's FAT and FAT32 file systems, which are widely used in the majority of flash memory storage devices such as USB drives and SD cards. It uses 64 bits to describe file size and allows for clusters as large as 32MB. As per the specification, it was implemented with simplicity and extensibility in mind.

John Gossman, Microsoft Distinguished Engineer and Linux Foundation board member, wrote in the announcement, “exFAT is the Microsoft-developed file system that’s used in Windows and in many types of storage devices like SD cards and USB flash drives. It’s why hundreds of millions of storage devices that are formatted using exFAT “just work” when you plug them into your laptop, camera, and car.”

As exFAT was previously proprietary, mounting these flash drives and cards on Linux machines required installing additional software, such as a FUSE-based exFAT implementation. Supporting exFAT in the Linux kernel will give users a full-featured implementation that can also be more performant than the FUSE implementation. Its inclusion in OIN's Linux System Definition will also allow cross-licensing in a royalty-free manner. Microsoft shared that the exFAT code incorporated into the Linux kernel will be licensed under GPLv2.
https://twitter.com/OpenAtMicrosoft/status/1166773276166828033

In addition to supporting exFAT in the Linux kernel, Microsoft also hopes that its specification becomes part of the Open Invention Network’s (OIN) Linux definition. Keith Bergelt, OIN's CEO, told ZDNet, "We're happy and heartened to see that Microsoft is continuing to support software freedom. They are giving up the patent levers to create revenue at the expense of the community. This is another step of Microsoft's transformation in showing it's truly committed to Linux and open source."

The next edition of the Linux System Definition is expected to be published in the first quarter of 2020, after which any member of the OIN will be able to use exFAT without paying a patent royalty. The Linux Foundation also appreciated Microsoft's move to bring exFAT to the Linux kernel:
https://twitter.com/linuxfoundation/status/1166744195199115264

Other developers also shared their excitement. A Hacker News user commented, “OMG, I can't believe we finally have a cross-platform read/write disk format. At last. No more Fuse. I just need to know when it will be available for my Raspberry Pi.”

Read the official announcement by Microsoft to know more in detail.

Microsoft Edge Beta is now ready for you to try
Microsoft introduces public preview of Azure Dedicated Host and updates its licensing terms
CERN plans to replace Microsoft-based programs with an affordable open-source software