
Tech News - Programming


Microsoft releases Cascadia Code version 1909.16, the latest monospaced font for Windows Terminal and Visual Studio Code

Amrata Joshi
19 Sep 2019
2 min read
Yesterday the team at Microsoft released Cascadia Code version 1909.16, the latest monospaced font for command-line applications like Windows Terminal and code editors like Visual Studio Code. The team first announced the font at the Microsoft Build conference in May this year. Cascadia Code version 1909.16 is now publicly available on GitHub, where developers can contribute to the font; it is licensed under the SIL Open Font License.

Cascadia Code supports programming ligatures, which combine sequences of characters into new glyphs while writing code. These ligatures make code more readable and user-friendly.

The name “Cascadia Code” comes from the Windows Terminal project: Cascadia was the codename for Windows Terminal before it was released.

https://twitter.com/cinnamon_msft/status/1130864977185632256

The official post reads, “As an homage to the Terminal, we liked the idea of naming the font after its codename. We added Code to the end of the font name to help indicate that this font was intended for programming. Specifically, it helps identify that it includes programming ligatures.”

Users can install the Cascadia Code font from the GitHub repository’s releases page or receive it in the next update of Windows Terminal.

Users are overall excited about this news, noting that even the official announcement blog post is set in Cascadia Code, and appreciating the team for adding support for programming ligatures.

https://twitter.com/bitbruder/status/1174432721038389253
https://twitter.com/singhkays/status/1174541216261652482
https://twitter.com/FiraCode/status/1174608467442720768

A user commented on Hacker News, “I really like this. Feels easy on the eyes (at least to me). I've used Fira Code for as long as I can remember, but going to give this a go!”

Other interesting news in programming

DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020
Microsoft open-sources its C++ Standard Library (STL) used by MSVC tool-chain and Visual Studio
Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements


DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020

Fatema Patrawala
19 Sep 2019
4 min read
Yesterday GitLab, a San Francisco-based start-up, raised $268 million in a Series E funding round, valuing the company at $2.75 billion, more than double its last valuation. In its Series D round of $100 million the company was valued at $1.1 billion; with this announcement, the valuation has more than doubled in less than a year.

GitLab provides a DevOps platform for developing and collaborating on code, offering a single application for companies to draft, develop and release code. The product is used by companies such as Delta Air Lines Inc., Ticketmaster Entertainment Inc. and Goldman Sachs Group Inc. The Series E round was led by investors including Adage Capital Management, Alkeon Capital, Altimeter Capital, Capital Group, Coatue Management, D1 Capital Partners, Franklin Templeton, Light Street Capital, Tiger Management Corp. and Two Sigma Investments.

GitLab plans to go public in November 2020

According to Forbes, GitLab has already set November 18, 2020 as the date for going public, and the company seems primed and ready for the eventual IPO. The $268 million gives the company considerable runway ahead of the planned event and the flexibility to choose how to take the company public.

“One other consideration is that there are two options to go public. You can do an IPO or direct listing. We wanted to preserve the optionality of doing a direct listing next year. So if we do a direct listing, we’re not going to raise any additional money, and we wanted to make sure that this is enough in that case,” Sid Sijbrandij, GitLab co-founder and CEO, explained in an interview with TechCrunch.

He added that the new funds will be used to add monitoring and security to GitLab’s offering and to grow the company from its current 400 employees to more than 1,000 this year. GitLab is able to add workers at a rapid rate because it has an all-remote workforce.

GitLab wants to stay independent and chooses transparency for its community

Sijbrandij says the company made a deliberate decision to be transparent early on. For a company built on an open-source project, the transition to a commercial business can be tricky and can hurt the community and the number of contributions. Transparency was a way to combat that, and it seems to be working: he reports that the community contributes 200 improvements to the GitLab open-source products every month, double the amount of a year ago, so the community is still highly active.

He did not ignore the fact that Microsoft acquired GitHub, a similar company that helps developers manage and distribute code in a DevOps environment, for $7.5 billion last year. In spite of that eye-popping number, he says his goal is to remain an independent company and take GitLab through to the next phase.

“Our ambition is to stay an independent company. And that’s why we put out the ambition early to become a listed company. That’s not totally in our control as the majority of the company is owned by investors, but as long as we’re more positive about the future than the people around us, I think we can we have a shot at not getting acquired,” he said.

Community is happy with GitLab’s products and services

Overall the community is happy with this news and with GitLab’s products and services. One of the comments on Hacker News reads, “Congrats, GitLab team. Way to build an impressive business. When anybody tells you there are rules to venture capital — like it’s impossible to take on massive incumbents that have network effects — ignore them. The GitLab team is doing something phenomenal here. Enjoy your success! You’ve earned it.”

Another user comments, “We’ve been using Gitlab for 4 years now. What got us initially was the free private repos before github had that. We are now a paying customer. Their integrated CICD is amazing. It works perfectly for all our needs and integrates really easily with AWS and GCP. Also their customer service is really damn good. If I ever have an issue, it’s dealt with so fast and with so much detail. Honestly one of the best customer service I’ve experienced. Their product is feature rich, priced right and is easy. I’m amazed at how the operate. Kudos to the team”

Other interesting news in programming

Microsoft open-sources its C++ Standard Library (STL) used by MSVC tool-chain and Visual Studio
Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements
NVIM v0.4.0 releases with new API functions, Lua library, UI events and more!


Microsoft open-sources its C++ Standard Library (STL) used by MSVC tool-chain and Visual Studio

Vincy Davis
18 Sep 2019
4 min read
Last week, Microsoft open-sourced its implementation of the C++ Standard Library, also known as the STL. The library ships with the MSVC (Microsoft Visual C++ compiler) toolset and the Visual Studio IDE. The announcement was made by the MSVC team at the CppCon 2019 conference two days ago.

Users can use the C++ library repository to participate in the STL's development by reporting issues and commenting on pull requests. The MSVC team is still working on migrating the C++ Standard Library to GitHub; currently, the GitHub repository contains all of MSVC's product source code, a new CMake build system and a README. The team also plans to use GitHub issues to track C++20 features, LWG issues, conformance bugs, performance improvements, and other to-dos. The roadmap and iteration plans for the C++ Standard Library are also in progress.

Why did Microsoft open-source the C++ Standard Library?

Microsoft open-sourced the STL to give its users easy access to the latest developments in C++, letting them try out the latest changes and review pull requests. The MSVC team hopes that as C++ standardization accelerates, it will be easier for users to adopt major features.

Microsoft chose to open-source the STL in particular because of its unique design and fast-evolving nature compared to other MSVC libraries and the compiler. It is also “easy to contribute to, and somewhat loosely coupled, unlike the compiler.” The official blog post adds, “We also want to contribute back to the C++ community by making it possible to take our implementations of major features.”

What are the primary goals of the C++ Standard Library?

Microsoft is implementing the latest C++ Working Draft, which will eventually become the next C++ International Standard. The goals of the Microsoft C++ Standard Library are conformance to the spec, extreme speed, usability, and extensive compatibility. Speed being a core strength of C++, the STL needs to be extremely fast at runtime, so the MSVC team spends more time on optimization than most general-purpose libraries do. They are also working on other parts of the programming experience, such as compiler throughput, diagnostic messages, and debugging checks.

The team is keeping VS 2019 binary-compatible with VS 2017 and VS 2015. They consider source compatibility to be important, but not all-important; breaking source compatibility can be an acceptable cost if done for the right reasons in the right way.

The blog post states that MSVC’s STL is distributed under the Apache License v2.0 with LLVM Exceptions and is distinct from the libc++ library. However, if any of libc++'s maintainers are interested in taking feature implementations from MSVC's STL, or in collaborating on the development of new features in both libraries simultaneously, the MSVC team will help irrespective of the licensing.

Users have welcomed Microsoft's move to open-source its C++ Standard Library. A Redditor says, “Thank you! Absolutely amazing. It's been one of my guilty pleasures ever since I started with C++ to prod about in your internals to see how stuff works so this is like being taken to the magical chocolate factory for me.” Another user comments, “thank you for giving back to the open source world. ❤🤘”

Interested readers can learn how to build with the Native Tools Command Prompt and the Visual Studio IDE on GitHub.

Latest news in Tech

Open AI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment
As Kickstarter reels in the aftermath of its alleged union-busting move, is the tech industry at a tipping point?
Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements


Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements

Vincy Davis
18 Sep 2019
4 min read
Two days ago, Linus Torvalds, the principal developer of the Linux kernel, announced the release of Linux 5.3 on the Linux Kernel Mailing List (lkml). This major release brings support for AMD Navi GPUs, the umwait x86 instructions, and Intel Speed Select. Linux 5.3 also introduces a new pidfd_open(2) system call and makes 16 million new IPv4 addresses in the 0.0.0.0/8 range available. There are also many new drivers and improvements in this release. The previous version, Linux 5.2, was released more than two months ago; it included the Sound Open Firmware project, a new mount API, improved pressure stall information and more.

What’s new in Linux 5.3?

pidfd_open(2) system call

PID (process identification number) reuse has been a long-standing issue in Linux. Linux 5.1 added pidfd_send_signal(2), which allowed processes to send signals to stable ‘pidfd’ handles even after PID reuse, and Linux 5.2 added CLONE_PIDFD to clone(2), which let users create PIDs that were usable with pidfd_send_signal(2). However, this created problems for Android's low memory killer (LMK). Linux 5.3 therefore adds a new pidfd_open(2) syscall to complete the functionality needed to deal with the PID reuse issue. This release also adds polling support for pidfds, allowing process managers to identify when a process dies in a race-free way (a minimal C sketch of this pattern appears at the end of this article).

Support for AMD Navi GPUs

Linux 5.3 provides initial support for the AMD Navi GPUs in the amdgpu driver. The AMD Navi GPUs are the new AMD RX5700 GPUs which became available recently. This release adds support for the core driver, displays (DCN2), GFX and compute (GFX10), System DMA (SDMA 5), multimedia decode and encode (VCN2) and power management.

Zhaoxin x86 CPU support

This release also supports the Zhaoxin x86 processors. The report states, “The architecture of the ZX family of processors is a continuation of VIA's Centaur Technology x86-64 Isaiah design.”

Intel Speed Select support for easier power tuning

Linux 5.3 also adds support for Intel Speed Select, a feature only available on specific Xeon servers. This power management technology allows users to configure their servers for throughput and per-core performance settings, enabling prioritization of performance for certain workloads running on specific cores.

16 million new IPv4 addresses

This release makes the 0.0.0.0/8 IPv4 range acceptable to Linux as a valid address range, freeing up 16 million new IPv4 addresses. The IPv4 address space includes hundreds of millions of addresses which were previously reserved for future use; the IPv4 Cleanup Project has now made these addresses usable.

Utilization clamping support in the task scheduler

This release adds utilization clamping support to the task scheduler, a refinement of the energy-aware scheduling framework for power-asymmetric systems (like ARM big.LITTLE) added in Linux 5.0. Per-task clamping attributes can be set through sched_setattr(2). This feature is intended to replace the hacks Android had developed to achieve the same result.

Improvements in Core

Io_uring: added support for recvmsg(), sendmsg(), and Submission Queue Entry links.
Task scheduler: new tracepoints added, which will be required for energy-aware scheduling testing.
CONFIG_PREEMPT_RT: will help the RT patchset to be fully integrated into the mainline kernel in future merges.

Improvements in memory management

Smaps: now reports separate components for the PSS in the smaps_rollup proc file. This will help in tuning memory manager behavior on consumer devices, particularly mobile devices.
Swap: uses an rbtree for swap_extent instead of a linked list, improving swap performance when many processes access the swap device concurrently.

Linux developers are happy with the Linux 5.3 features, especially the new support for AMD Navi GPUs.

https://twitter.com/NoraDotCodes/status/1173621317033218049

A Redditor comments, “I'm really glad to hear that Linux is catching up to the navi gpus as I just invested in all that and after building a new box in attempting to do GPU pass-through for a straight up Linux host and windows VM realized that things aren't quite there yet.” Another user says, “Looks like some people were eagerly waiting for this release. I'm glad the Linux kernel keeps evolving and improving.”

These are some of the selected updates in Linux 5.3. You may go through the release notes for more details.

Latest news in Linux

A recap of the Linux Plumbers Conference 2019
Lilocked ransomware (Lilu) affects thousands of Linux-based servers
IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation
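As an illustration of the race-free pattern described above, here is a minimal, hedged C sketch: it invokes pidfd_open(2) via syscall(2) (glibc did not yet ship a wrapper when 5.3 came out) and then polls the returned file descriptor, which becomes readable when the target process exits. The syscall-number fallback (434) is the x86_64 value; the command-line handling and error paths are illustrative only, not taken from the kernel announcement.

```c
/* Hypothetical sketch: waiting for a process to exit via a pidfd (Linux >= 5.3). */
#define _GNU_SOURCE
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef __NR_pidfd_open
#define __NR_pidfd_open 434   /* x86_64 syscall number; adjust for other architectures */
#endif

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    pid_t pid = (pid_t)atoi(argv[1]);

    /* pidfd_open(2) returns a file descriptor that refers to this exact
     * process, so it cannot be confused with a recycled PID later on. */
    int pidfd = (int)syscall(__NR_pidfd_open, pid, 0);
    if (pidfd < 0) {
        perror("pidfd_open");
        return 1;
    }

    /* Since Linux 5.3 a pidfd becomes readable when the process exits,
     * so a process manager can wait on it with poll/epoll race-free. */
    struct pollfd pfd = { .fd = pidfd, .events = POLLIN };
    if (poll(&pfd, 1, -1) < 0) {
        perror("poll");
        return 1;
    }

    printf("process %d has exited\n", (int)pid);
    close(pidfd);
    return 0;
}
```

On kernels older than 5.3 the syscall fails with ENOSYS, so real code would need a fallback mechanism.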


NVIM v0.4.0 releases with new API functions, Lua library, UI events and more!

Amrata Joshi
17 Sep 2019
2 min read
Last Sunday, the team behind Neovim, a project that refactors the Vim source code, released NVIM v0.4.0. This release includes approximately 2,700 commits since v0.3.4 (a non-maintenance release) and comes with improvements to documentation, test/CI infrastructure and internal subsystems, as well as 700+ patches merged from Vim.

What’s new in NVIM v0.4.0?

API functions

This release comes with new API functions, including nvim_create_buf for creating various types of buffers, nvim_get_context and nvim_load_context, and nvim_input_mouse for performing mouse actions. Users can create floating windows with nvim_open_win.

UI events

New UI events are included, such as redraw.grid_destroy, redraw.hl_group_set, redraw.msg_clear, and more.

Lua library

NVIM v0.4.0 introduces the "Nvim-Lua standard library", which comes with string functions and generates documentation from docstrings.

Multigrid windows

Windows are now isolated internally and can be drawn on separate grids. These windows are sent as distinct objects to UIs so that UIs can control the layout.

Support for sign columns

This release supports multiple auto-adjusted sign columns, so extra columns are shown automatically to accommodate all existing signs.

Major changes

Lua error messages have been improved and menu_get() has been fixed. jemalloc, a general-purpose malloc implementation, has been removed. The 'scrollback' option is now more consistent and future-proof.

To know more about this news, check out the release notes.

Other interesting news in programming

A recap of the Linux Plumbers Conference 2019
GNU community announces ‘Parallel GCC’ for parallelism in real-world compilers
TextMate 2.0, the text editor for macOS releases


LLVM’s Clang 9.0 to ship with experimental support for OpenCL C++17, asm goto initial support, and more

Bhagyashree R
17 Sep 2019
2 min read
The stable release of LLVM 9.0 is expected in the next few weeks, along with subprojects like Clang 9.0. As per the release notes, the upcoming Clang 9.0 release will come with experimental support for C++17 features in OpenCL, initial asm goto support, and much more.

Read also: LLVM 9.0 RC3 is now out with official RISC-V support, updates to SystemZ and more

What’s coming in Clang 9.0.0

Experimental support for C++17 features in OpenCL

Clang 9.0.0 will have experimental support for C++17 features in OpenCL. The experimental support includes improved address space behavior in the majority of C++ features. There is support for OpenCL-specific types such as images, samplers, events, and pipes. Also, invoking global constructors from the host side is possible using a specific, compiler-generated kernel.

C language updates in Clang

Clang 9.0.0 includes the __FILE_NAME__ macro as a Clang-specific extension supported in all C-family languages. It is very similar to the __FILE__ macro except that it always provides the last path component when possible. Another C-language update is initial support for asm goto statements, which allow control flow to pass from inline assembly to C labels. This construct is mainly used by the Linux kernel (CONFIG_JUMP_LABEL=y) and glib. A short C sketch of both features follows this article.

Building Linux kernels with Clang 9.0

With the addition of asm goto support, the mainline Linux kernel for x86_64 is now buildable and bootable with Clang 9. The team adds, “The Android and ChromeOS Linux distributions have moved to building their Linux kernels with Clang, and Google is currently testing Clang built kernels for their production Linux kernels.”

Read also: Linux 4.19 kernel releases with open arms and AIO-based polling interface; Linus back to managing the Linux kernel

Build system changes

Previously, the install-clang-headers target installed clang’s resource directory headers. With Clang 9.0, this installation is done by the install-clang-resource-headers target. “Users of the old install-clang-headers target should switch to the new install-clang-resource-headers target. The install-clang-headers target now installs clang’s API headers (corresponding to its libraries), which is consistent with the install-llvm-headers target,” the release notes read.

To know what else is coming in Clang 9.0, check out its official release notes.

Other news in Programming

Core Python team confirms sunsetting Python 2 on January 1, 2020
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices
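Below is a minimal, hedged C sketch of the two language features mentioned above; it is an illustration, not code from the Clang release notes. The __FILE_NAME__ fallback guard and the do-nothing asm goto are assumptions made for portability: in real kernel code the asm template contains a jump that is patched at runtime (static keys), whereas the empty template here never actually branches.

```c
#include <stdio.h>

/* __FILE_NAME__ is a Clang 9 extension; fall back to __FILE__ elsewhere. */
#ifndef __FILE_NAME__
#define __FILE_NAME__ __FILE__
#endif

/* asm goto: inline assembly that may transfer control to a C label.
 * The template is empty, so at run time the fallthrough path is always
 * taken; it only demonstrates the syntax Clang 9 now accepts. */
static int feature_enabled(void)
{
    asm goto("" : /* no outputs allowed */ : /* inputs */ : /* clobbers */ : enabled);
    return 0;            /* fallthrough path */
enabled:
    return 1;            /* reached only if the asm jumps here */
}

int main(void)
{
    /* __FILE_NAME__ prints just the file's last path component. */
    printf("built from %s, feature_enabled() = %d\n",
           __FILE_NAME__, feature_enabled());
    return 0;
}
```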

Oracle releases JDK 13 with switch expressions and text blocks preview features, and more!

Bhagyashree R
17 Sep 2019
3 min read
Yesterday, Oracle announced the general availability of Java SE 13 (JDK 13), with binaries expected to be available for download today. In addition to improved performance, stability, and security, this release comes with two preview features: switch expressions and text blocks. The announcement coincides with the start of Oracle’s co-located OpenWorld and Code One conferences, taking place in San Francisco from September 16-17, 2019.

Oracle’s director of Java SE Product Management, Sharat Chander, wrote in the announcement, “Oracle offers Java 13 for enterprises and developers. JDK 13 will receive a minimum of two updates, per the Oracle CPU schedule, before being followed by Oracle JDK 14, which is due out in March 2020, with early access builds already available.”

This release is licensed under the GNU General Public License v2 with the Classpath Exception (GPLv2+CPE). For those using an Oracle JDK release as part of an Oracle product or service, it is available under a commercial license.

Read also: Oracle releases open-source and commercial licenses for Java 11 and later

What’s new in JDK 13

JDK 13 implements the following Java Enhancement Proposals (JEPs):

Dynamic class-data sharing archives (JEP 350)

JEP 350 improves the usability of application class-data sharing by allowing the dynamic archiving of classes once the execution of a Java application is completed. The archived classes will consist of all loaded application classes and library classes that are not present in the default, base-layer CDS archive.

Uncommit unused memory (JEP 351)

Previously, the Z Garbage Collector did not uncommit and return memory to the operating system, even if it was left unused for a long time. With JEP 351 implemented in JDK 13, the Z Garbage Collector will return unused heap memory to the operating system.

Read also: Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]

Reimplement the legacy Socket API (JEP 353)

In JDK 13, the underlying implementation used by the java.net.Socket and java.net.ServerSocket APIs is replaced by “a simpler and more modern implementation that is easy to maintain and debug,” as per JEP 353. The new implementation aims to make adapting to user-mode threads, or fibers, currently being explored in Project Loom, much easier.

Switch expressions preview (JEP 354)

The switch expressions feature proposed in JEP 354 allows using ‘switch’ as either a statement or an expression. Developers can use both the traditional ‘case ... :’ labels (with fall through) and the new ‘case ... ->’ labels (with no fall through). This preview feature aims to simplify everyday coding and prepare the way for the use of pattern matching (JEP 305) in switch.

Text blocks preview (JEP 355)

The text blocks preview feature proposed in JEP 355 makes it easy to express strings that span several source code lines. This preview feature aims to improve both “the readability and the writeability of a broad class of Java programs to have a linguistic mechanism for denoting strings more literally than a string literal.”

Check out the official announcement by Oracle to know what else has landed in JDK 13.

Other news in programming

Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices
Golang 1.13 module mirror, index, and Checksum database are now production-ready
Why Perl 6 is considering a name change?


Darklang available in private beta

Fatema Patrawala
17 Sep 2019
4 min read
Yesterday, the team behind the Dark programming language unveiled Darklang’s private beta. Dark is a holistic programming language, editor, and infrastructure for building backends. Developers write in the Dark language, using the Dark editor, and the program is hosted on Dark’s infrastructure. As a result, they can code without thinking about infrastructure and get safe, instant deployment, which the team calls “deployless” development. According to the team, backends today are too complicated to build, and Dark is designed to reduce that complexity.

Ellen Chisa, CEO of Dark, says, “Today we’re releasing two videos showing how Dark works. And demonstrate how to build a backend application (an office sign-in app) in 10 minutes.” Paul Biggar, the CTO, also talks about Dark’s philosophy and the details of the language, the editor and the infrastructure. He shows how they make “deployless” safe with feature flags and versioning, and how Dark allows introspecting and debugging live requests.

Alpha users of Darklang built backends for web and mobile applications

The Dark team says that during the private alpha, developers built entire backends in Dark. Chase Olivieri built Altitude, a flight deal subscription site. Julius Tarng moved the backend of Tokimeki Unfollow to Dark for scalability. Jessica Greenwalt & Pixelkeet ported Birb, their internal project tracker, into a SaaS for other design studios to use. The team has also seen alpha users build backends for web and mobile applications, internal tools, Slackbots, Alexa skills, and personal projects. They have even started building parts of Dark in Dark, including their presence service and large parts of the signup flow. Additionally, the team will admit developers to the private beta immediately if their project is well-scoped and ready to get started.

Community unhappy with the private release, expects open source

On Hacker News, users are arguing that any new programming language in this day and age has to be open source. One of them commented, “Is there an open source version of the language? ...bc I'm not touching a programming language with a ten foot pole if it hasn't got at least two implementations, and at least one open source :| Sure, keep the IDEs and deployless infrastructure and all proprietary, but a core programming language in 2019 can only be open-source. Heck, even Microsoft gets it now.”

Another one says, “They are 'allowing' people into a private beta of a programming language? Coupled with the fact it is not open source and has a bunch of fad ad-tech videos on the front page this is so many red flags.”

Others compare Dark with different programming languages, mainly Apex, Rust and Go. A user comment reads, “I see a lot of Parse comparisons, but for me this is way more like Force.com from Salesforce and the Apex language. Proprietary language (Apex, which is Java 6-ish), complete vertical integration, no open source spec or implementation.”

Another one says, “Go - OK, it has one implementation (open-source), but it's backed by one big player (Google) and used by many others... also the simplicity at core design decisions sound like the kind of choices that would make an alternative compiler easier to implement than for other languages Rust - pretty fast growing open-source community despite only one implementation... but yeah I'm sort of worried that Rust is a "hard to implement" kind of language with maybe a not high enough bus factor... similar worries for Julia too But tbh I'm not drawn much to either Go and Rust for other reasons - Go is too verbose for my taste, no way to write denser code that highlights the logic instead of the plumbing, and it has a "dumb" type system, Rust seems a really bad choice for rapid prototyping and iteration which is what I care about now.”

Other interesting news in programming this week

Introducing ‘ixy’, a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others
TextMate 2.0, the text editor for macOS releases
GNU community announces ‘Parallel GCC’ for parallelism in real-world compilers


A recap of the Linux Plumbers Conference 2019

Vincy Davis
17 Sep 2019
4 min read
This year’s Linux Plumbers Conference concluded on 11 September 2019. The invitation-only conference for top Linux kernel developers was held in Lisbon, Portugal this year. It brings together developers working on the plumbing of Linux (kernel subsystems, core libraries, windowing systems, and so on) to think about core design problems.

Unlike most tech conferences, which generally discuss the future of the Linux operating system, the Linux Plumbers Conference has a distinct motive. In an interview with ZDNet, Linux creator Linus Torvalds said, “The maintainer summit is really different because it doesn't even talk about technical issues. It's all about the process of creating and maintaining the Linux kernel.” In short, the developers attending the conference know confidential and intimate details about some of the Linux kernel subsystems, which is perhaps why the conference has the word ‘Plumbers’ in its name.

Read also: Introducing kdevops, a modern DevOps framework for Linux kernel development

The conference is divided into several working sessions focusing on different plumbing topics. This year the Linux Plumbers Conference had over 18 microconferences, on topics like RISC-V, tracing, distribution kernels, live patching, open printing, toolchains, testing and fuzzing, and more.

Some microconferences covered at the Linux Plumbers Conference 2019

The RISC-V microconference (MC) focused on finding solutions for changing the kernel. In the long run, this discussion is expected to result in active developer participation in code review and patch submissions for a better and more stable kernel for RISC-V. Topics covered in the RISC-V MC included RISC-V platform specification progress and fixing the Linux boot process on RISC-V.

The Live Patching MC held an open discussion among all involved stakeholders on live patching issues, with the aim of making live patching of the Linux kernel and the Linux userspace live patching feature complete. Such open discussions have been a success at past conferences, producing useful output that helps push live patching development forward. Topics included everything that has happened in kernel live patching over the last year, an API for state changes made by callbacks, and source-based livepatch creation tooling.

The System Boot and Security MC concentrated on open-source security, including bootloaders, firmware, BMCs and TPMs. The potential speakers and key participants for the MC included everybody interested in GRUB, iPXE, coreboot, LinuxBoot, SeaBIOS, UEFI, OVMF, TianoCore, IPMI, OpenBMC, TPM, and other related projects and technologies.

The main goal of this year’s Remote Direct Memory Access (RDMA) MC was to resolve open issues in RDMA and PCI peer-to-peer for GPU and NVMe applications, including HMM and DMABUF topics, RDMA and DAX, and contiguous system memory allocations for userspace (unresolved since 2017), among others. Other areas of interest included multi-vendor virtualized 'virtio' RDMA, non-standard driver features and their impact on the design of the subsystem, and more.

Read also: Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range

Linux developers who attended the Plumbers 2019 conference were appreciative of it and took to Twitter to share their experiences.

https://twitter.com/russelldotcc/status/1172193214272606209
https://twitter.com/odeke_et/status/1173108722744225792
https://twitter.com/jwboyer19/status/1171351233149448193

The videos of the conference are not out yet; the team behind the conference has tweeted that they will be uploaded soon. Keep checking this space for more details about the Linux Plumbers Conference 2019. Meanwhile, you can check out last year’s talks on YouTube.

Latest news in Linux

Lilocked ransomware (Lilu) affects thousands of Linux-based servers
Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs
IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation


GNU community announces ‘Parallel GCC’ for parallelism in real-world compilers

Savia Lobo
16 Sep 2019
5 min read
Yesterday, the team behind the GNU project announced Parallel GCC, a research project aiming to parallelize a real-world compiler. Parallel GCC is useful on machines with many cores, where tools such as GNU Make cannot provide enough parallelism, and it can also inform the design of a parallel compiler built from scratch.

GCC is an optimizing compiler that automatically optimizes code when compiling. Its optimization phase involves three steps:

Inter Procedural Analysis (IPA): builds a callgraph and uses it to decide how to perform optimizations.
GIMPLE Intra Procedural Optimizations: performs several hardware-independent optimizations inside each function.
RTL Intra Procedural Optimizations: performs several hardware-dependent optimizations inside each function.

IPA collects information and decides how to optimize all functions, then sends a function to the GIMPLE optimizer, which in turn sends it to the RTL optimizer, where the final code is generated. This process repeats for every function in the code.

Read also: Oracle introduces patch series to add eBPF support for GCC

Why a Parallel GCC?

The team designed the parallel architecture to increase parallelism and reduce overhead. When IPA finishes its analysis, a number of threads equal to the number of logical processors are spawned to avoid scheduling overhead. One of those threads inserts all analyzed functions into a threadsafe producer-consumer queue, which all the threads consume. Once a thread has finished processing one function, it queries the queue for the next available function, until it finds an EMPTY token. When that happens, the thread finalizes, as there are no more functions to be processed. (A minimal C sketch of this producer-consumer pattern appears at the end of this article.)

This architecture is used to parallelize per-function GIMPLE Intra Procedural Optimizations and can be easily extended to also support RTL Intra Procedural Optimizations. It does not, however, cover IPA passes or the per-language front-end analysis.

Code refactoring to achieve Parallel GCC

The team refactored several parts of the GCC middle-end code in the Parallel GCC project, and says there are still many places where code refactoring is necessary for the project to succeed. “The original code required a single function to be optimized and outputted from GIMPLE to RTL without any possible change of what function is being compiled,” the researchers wrote in their official blog.

Several structures in GCC were made per-thread or threadsafe, either by replicating them using the C11 thread-local notation, by allocating the data structure on the thread stack, or simply by inserting locks. “One of the most tedious parts of the job was detecting making several global variables threadsafe, and they were the cause of most crashes in this project. Tools made for detecting data-races, such as Helgrind and DRD, were useful in the beginning but then showed its limitations as the project advanced. Several race conditions had a small window and did not happen when the compiler ran inside these tools. Therefore there is a need for better tools to help to find global variables or race conditions,” the blog mentions.

Performance results

The team compiled gimple-match.c, the biggest file in the GCC project, with more than 100,000 lines of code, around 1,700 functions, and almost no loops inside those functions. The computer used in this benchmark had an Intel(R) Core(TM) i5-8250U CPU and 8 GB of RAM: a 4-core CPU with Hyperthreading, giving 8 virtual cores.

In the results published for Intra Procedural GIMPLE parallelization (figures at gcc.gnu.org), elapsed time dropped from 7 seconds to around 4 seconds with 2 threads and around 3 seconds with 4 threads, speedups of 1.72x and 2.52x respectively; using Hyperthreading did not affect the result. This result was used to estimate the improvement from RTL parallelization. Compared with the total compilation time, there is a small improvement of about 10% when compiling the file, and using the same approach a speedup of 1.61x can be estimated for GCC as a whole once it is parallelized, based on the speedup information obtained for GIMPLE.

The team has suggested several to-dos for anyone wanting to work on Parallel GCC:

Find and fix all race conditions in GIMPLE; there are still random crashes when code is compiled with the parallel option.
Make this GCC compile itself.
Make this GCC pass all tests in the testsuite.
Add multithread support to the garbage collector.
Parallelize the RTL part, which should improve the current results, as indicated in the Results chapter.
Parallelize the IPA part, which can also improve the time of LTO compilations.
Refactor all occurrences of thread-local storage by allocating these variables as soon as threads are started, or at pass execution.

GCC project members say that this project is under development and still has several bugs. A user on Hacker News writes, “I look forward to this. One that will be important for reproducible builds is having tests for non-determinism. Having nondeterministic code gen in a compiler is a source of frustration and despair and sucks to debug.”

To know about Parallel GCC in detail, read the official document.

Other interesting news in programming

Introducing ‘ixy’, a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others
GNOME 3.34 releases with tab pinning, improved background panel, custom folders and more!
The Eclipse Foundation releases Jakarta EE 8, the first truly open-source, vendor-neutral Java EE
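To make the queue-based architecture above concrete, here is a minimal, self-contained C sketch of a threadsafe producer-consumer queue in the spirit described: a producer enqueues work items followed by one EMPTY (sentinel) token per worker, and each worker consumes items until it sees the sentinel. This is an illustration using POSIX threads, not code from the Parallel GCC branch; all names (work_queue, EMPTY_TOKEN, and so on) are hypothetical.

```c
/* Hypothetical sketch of the producer-consumer pattern described above (POSIX threads). */
#include <pthread.h>
#include <stdio.h>

#define QUEUE_CAP 64
#define EMPTY_TOKEN (-1)          /* sentinel telling a worker to finish */
#define NUM_WORKERS 4

typedef struct {
    int items[QUEUE_CAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty, not_full;
} work_queue;

static work_queue q = {
    .lock = PTHREAD_MUTEX_INITIALIZER,
    .not_empty = PTHREAD_COND_INITIALIZER,
    .not_full = PTHREAD_COND_INITIALIZER,
};

static void enqueue(int item)
{
    pthread_mutex_lock(&q.lock);
    while (q.count == QUEUE_CAP)
        pthread_cond_wait(&q.not_full, &q.lock);
    q.items[q.tail] = item;
    q.tail = (q.tail + 1) % QUEUE_CAP;
    q.count++;
    pthread_cond_signal(&q.not_empty);
    pthread_mutex_unlock(&q.lock);
}

static int dequeue(void)
{
    pthread_mutex_lock(&q.lock);
    while (q.count == 0)
        pthread_cond_wait(&q.not_empty, &q.lock);
    int item = q.items[q.head];
    q.head = (q.head + 1) % QUEUE_CAP;
    q.count--;
    pthread_cond_signal(&q.not_full);
    pthread_mutex_unlock(&q.lock);
    return item;
}

/* Each worker stands in for a compiler thread optimizing one function at a time. */
static void *worker(void *arg)
{
    long id = (long)arg;
    for (;;) {
        int fn = dequeue();
        if (fn == EMPTY_TOKEN)      /* no more functions to process */
            break;
        printf("thread %ld: optimizing function %d\n", id, fn);
    }
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, (void *)i);

    /* The "IPA" side: enqueue analyzed functions, then one sentinel per worker. */
    for (int fn = 0; fn < 20; fn++)
        enqueue(fn);
    for (int i = 0; i < NUM_WORKERS; i++)
        enqueue(EMPTY_TOKEN);

    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}
```

Compile the sketch with the -pthread flag; the real project applies the same idea to GCC's internal per-function pass pipeline rather than a toy print loop.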

TextMate 2.0, the text editor for macOS releases

Amrata Joshi
16 Sep 2019
3 min read
Yesterday, the team behind TextMate released TextMate 2.0 and announced that its code is available via the GitHub repository. The team had open-sourced the alpha version of TextMate 2.0 back in 2012; one of the reasons for open-sourcing the code was to indicate that Apple isn’t limiting user and developer freedom on the Mac platform. In this release, the qualifier suffix in the version string has been deprecated and the 32-bit APIs have been replaced. The release also comes with improved accessibility support.

What’s new in TextMate 2.0?

Makes swapping easy
This release allows users to easily swap pieces of code.

Makes search results convenient
TextMate presents search results in a way that lets users switch between matches, extract matched text and preview desired replacements.

Version control
Users can see changes in the file browser view and check the changes made to lines of code in the editor view.

Improved commands
TextMate features WebKit as well as a dialog framework for Mac-native or HTML-based interfaces.

Converting code pieces into snippets
Users can turn commonly used pieces of text or code into snippets with transformations, placeholders, and more.

Bundles
Users can use bundles for customization across a number of different languages, workflows, markup systems, and more.

Macros
TextMate features macros that eliminate repetitive work.

The project was supposed to ship years ago, and its eventual release has made a lot of users happy.

A user commented on GitHub, “Thank you @sorbits. For making TextMate in the first place all those years ago. And thank you to everyone who has and continues to contribute to the ongoing development of TextMate as an open source project. ~13 years later and this is still the only text editor I use… all day every day.” Another user commented, “Immense thanks to all those involved over the years!”

A user commented on Hacker News, “I have a lot of respect for Allan Odgaard. Something happened, and I don't want to speculate, that caused him to take a break from Textmate (version 2.0 was supposed to come out 9 or so years ago). Instead of abandoning the project he open sourced it and almost a decade later it is being released. Textmate is now my graphical Notepad on Mac, with VS Code being my IDE and vim my text editor. Thanks Allan.”

It is still not clear why TextMate 2.0 took this long to be released. According to a few users on Hacker News, Allan Odgaard, the creator of TextMate, wanted to improve on the design of TextMate 1 and realised that doing so would require rewriting almost everything, which consumed much of his time. Another comment reads, “As Allan was getting less feedback about the code he was working on, and less interaction overall from users, he became less motivated. As the TextMate 2 project dragged past its original timeline, both Allan and others in the community started to get discouraged. I would speculate he started to feel like more of the work was a chore rather than a joyful adventure.”

To know more about this news, check out the release notes.

Other interesting news in Programming

Introducing ‘ixy’, a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others
GNOME 3.34 releases with tab pinning, improved background panel, custom folders and more!
GitHub Package Registry gets proxy support for the npm registry


Introducing ‘ixy’, a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others 

Vincy Davis
13 Sep 2019
6 min read
Researchers Paul Emmerich et al. have developed a new simple user-space network driver called ixy. According to the researchers, ixy is an educational user-space network driver for the Intel ixgbe family of 10 Gbit/s NICs. Its goal is to show that writing a super-fast network driver can be surprisingly simple in high-level languages like Rust, Go, Java and C#, among others. Ixy has no dependencies, high speed, and a simple-to-use interface for applications built on it. The researchers have published their findings in a paper titled The Case for Writing Network Drivers in High-Level Programming Languages.

The researchers initially implemented ixy in C and then implemented the same driver in other high-level languages such as Rust, Go, C#, Java, OCaml, Haskell, Swift, JavaScript, and Python. They found that the Rust driver executes 63% more instructions per packet but is only 4% slower than the reference C implementation, and that Go’s garbage collector keeps latencies below 100 µs even under heavy load.

Network drivers written in C are vulnerable to security issues

Drivers written in C are the norm in production-grade server, desktop, and mobile operating systems. Though C has the features required for low-level systems programming and fine-grained control over the hardware, C drivers are prone to security vulnerabilities, as “they are exposed to the external world or serve as a barrier isolating untrusted virtual machines”. The paper states that C code “accounts for 66% of the code in Linux, but 39 out of 40 security bugs related to memory safety found in Linux in 2017 are located in drivers. These bugs could have been prevented by using high-level languages for drivers.”

Implementing ixy in Rust, Go and other high-level languages

Rust: A lightweight Rust struct is allocated for each packet; it contains metadata and owns the raw memory (a hypothetical C sketch of such a packet-buffer wrapper appears at the end of this article). The compiler enforces that the object has a single owner and only the owner can access it. This prevents use-after-free bugs despite using a completely custom allocator. Rust is the only language evaluated in the case study that protects against use-after-free bugs and data races in memory buffers.

Go: External memory is wrapped in slices to provide bounds checks. The atomic package in Go also indirectly provides memory barriers and volatile semantics, thus offering stronger guarantees.

C#: The researchers implemented two of the several available ways of handling external memory. C# offers a more direct way to work with raw memory, with full support for pointers, no bounds checks and volatile memory access semantics.

Java: The researchers targeted OpenJDK 12, which offers a non-standard way to handle external memory via the sun.misc.Unsafe object, providing functions to read and write memory with volatile access semantics.

OCaml: OCaml Bigarrays backed by external memory are used for DMA buffers and PCIe resources, with allocation done via C helper functions. The Cstruct library allowed the researchers to access data in the arrays in a structured way by parsing definitions similar to C struct definitions and generating code for the necessary accessor functions.

Haskell: A compiled functional language with garbage collection. The necessary low-level memory access functions are available via the Foreign package, and memory allocation and mapping are available via System.Posix.Memory.

Swift: Memory is managed via automatic reference counting, i.e., the runtime keeps a reference count for each object and frees the object once it is no longer in use. Swift offers all the features necessary to implement drivers.

JavaScript: ArrayBuffers are used to wrap external memory in a safe way; these buffers can then be accessed as different integer types using TypedArrays, circumventing JavaScript’s restriction to floating-point numbers. Memory allocation and physical address translation are handled via a Node.js module written in C.

Python: This implementation was not explicitly optimized for performance and is meant as a simple prototyping environment for PCIe drivers and as an educational tool. The researchers provide primitives for PCIe driver development in Python.

Rust is found to be the prime candidate for safer network drivers

After implementing ixy in all of these high-level languages, the researchers conclude that Rust is the prime candidate for safer drivers. The paper states, “Rust’s ownership based memory management provides more safety features than languages based on garbage collection here and it does so without affecting latency.” Languages like Go and C# are also suitable if the system can cope with sub-millisecond latency spikes due to garbage collection, and languages like Haskell and OCaml can be useful if performance is less critical than having a safe and correct system. Though Rust performs better than the garbage-collected languages, it is 4% slower than the C driver, partly because Rust applies bounds checks while C does not, and partly because C does not require a wrapper object for DMA buffers.

Users have found the results of this case study quite interesting.

https://twitter.com/matthewwarren/status/1172094036297048068

A Redditor comments, “Wow, Rust and Go performed quite well. Maybe writing drivers in them isn't that crazy”. Many developers are also surprised by the results, especially the performance of Go and Swift. A comment on Hacker News says, “The fact that Go is slower than C# really amazes me! Not long ago I switched from C# to Go on a project for performance reasons, but maybe I need to go back.” Another Redditor says, “Surprise me a bit that Swift implementation is well below expected. Being Swift a compiled native ARC language, I consider the code must be revised.”

Interested readers can watch a video presentation by Paul Emmerich on ‘How to write PCIe drivers in Rust, go, C#, Swift, Haskell, and OCaml’. You can also find more implementation details in the research paper.

Other News in Tech

New memory usage optimizations implemented in V8 Lite can also benefit V8
Google releases Flutter 1.9 at GDD (Google Developer Days) conference
Intel’s DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
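For readers unfamiliar with what such drivers wrap, below is a hypothetical C sketch of the kind of per-packet buffer structure a user-space driver keeps for each DMA buffer: the metadata (physical address for the NIC, owning memory pool, packet length) plus the packet data itself. It illustrates the concept described in the Rust paragraph above; it is not the actual struct used by ixy or any of its ports, and all names are made up.

```c
/* Hypothetical per-packet DMA buffer wrapper for a user-space driver. */
#include <stdint.h>

struct mempool;                     /* fixed-size pool the buffer is returned to */

struct pkt_buf {
    uintptr_t       phys_addr;      /* physical address handed to the NIC for DMA */
    struct mempool *pool;           /* owning pool, needed to free the buffer */
    uint32_t        size;           /* length of the packet currently stored */
    uint8_t         data[2048];     /* the packet itself, sized for one full frame */
};

/* In C nothing stops code from touching a pkt_buf after returning it to its
 * pool; the paper's Rust port makes exactly this misuse a compile-time error
 * by giving each buffer a single owner. */
```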


GNOME 3.34 releases with tab pinning, improved background panel, custom folders and more!

Amrata Joshi
13 Sep 2019
4 min read
Yesterday, GNOME 3.34 was released as the latest version of GNOME, the open-source desktop environment for Unix-like operating systems. GNOME 3.34 comes six months after the release of GNOME 3.32 and brings features such as custom folders, tab pinning, an improved Background panel, updates to Boxes, and much more. This release also offers support for more than 34 languages with at least 80 percent of strings translated.

Fun fact: the GNOME 3.34 release is named “Thessaloniki” in recognition of GNOME’s primary annual conference, GUADEC, which was held in Thessaloniki, Greece this year.

What’s new in GNOME 3.34?

Visual refreshes
This release includes visual refreshes for a number of applications, including the desktop. The background selection settings have been redesigned, making it easy to select custom backgrounds.

Custom folders
This release introduces custom folders in the application overview: users can simply drag an application icon on top of another to create a folder. Once all the icons have been dragged out, folders are automatically removed.

Tab pinning
GNOME 3.34 brings tab pinning, so users can pin their favorite tabs and save them in the tab list.

Improved ad blocking
The ad-blocking feature has been updated to use WebKit content filters.

Improved Boxes workflow
GNOME’s virtual and remote machine manager, Boxes, has received a number of improvements. Separate dialogs are now used when adding a remote connection or external broker, and existing virtual machines can now be booted from an attached CD/DVD image, so users can simulate dual-booting environments.

Game states can now be saved
GNOME’s retro gaming application, Games, now supports multiple save states per game. Users can save as many game-state snapshots as they want, and save states can be exported and shared or moved between devices.

Improved Background panel
The Background panel has been redesigned and shows a preview of the selected background as used on the desktop and the lock screen. Users can add custom backgrounds using the “Add Picture…” button.

Improvements in the Music application
Music can now watch tracked sources, including the Music folder in the Home directory, for new or changed files, and will update automatically. This release features gapless playback and an updated layout in which the album, artist and playlist views have a better arrangement.

https://youtu.be/qAjPRr5SGoY

Updates for developers and system administrators

Flatpak 1.4 releases in sync with GNOME 3.34
Flatpak 1.4 has been released in sync with GNOME 3.34. Flatpak is central to GNOME’s developer experience plans and is a cross-distribution, cross-desktop technology for application building and distribution.

New updates to Builder
Builder, the GNOME IDE, has received a number of new features; it can now run a program in a container via podman, and the Git integration has been moved to an out-of-process gnome-builder-git daemon.

Sysprof integrated with core platform libraries
Sysprof, the GNOME instrumenting and system profiling utility, has been improved and is now integrated with a number of core platform libraries such as GTK, GJS, and Mutter.

New applications: Icon Library and Icon Preview
Two new applications, Icon Library and Icon Preview, have been released. Icon Library can be used for browsing symbolic icons, and Icon Preview helps designers and developers create and test new application icons.

Improved font rendering library
Pango, the font rendering library, now makes rendering text easier, giving developers more advanced control over their text rendering options.

To know more about this news, check out the release notes.

Other interesting news in Programming

GitHub Package Registry gets proxy support for the npm registry
Project management platform ClubHouse announces ‘Free Plan’ for up to 10 users and a new documentation tool
The Eclipse Foundation releases Jakarta EE 8, the first truly open-source, vendor-neutral Java EE

GitHub Package Registry gets proxy support for the npm registry

Bhagyashree R
12 Sep 2019
3 min read
Similar to the npm registry, RubyGems, and Docker Hub, GitHub introduced its own package management service, GitHub Package Registry, in May this year. After gathering community feedback, the team yesterday announced that the service now has proxy support for the primary npm registry. The feature that created a release whenever you published a package has also been removed.

GitHub Package Registry and its features

GitHub Package Registry allows you to host packages, publicly or privately, alongside your code in one place. It provides an end-to-end DevOps workflow consisting of your code, Continuous Integration (CI), and deployment solutions by integrating with GitHub APIs, GitHub Actions, and webhooks.

GitHub Package Registry comes with a number of features. It inherits the permissions and visibility associated with the repository; this unified permissions management relieves organizations from maintaining a separate package registry and mirroring permissions across systems. It gives you insight into packages by providing data such as download statistics, version history, and more. It also supports multiple package formats, so you can host multiple software package types in one registry.

Read also: GitHub announces the beta version of GitHub Package Registry, its new package management service

Proxy support for the primary npm registry

With the npm proxy support, developers can set the GitHub Package Registry as the source of their organization’s npm packages and as the proxied source of packages from npm. To use this feature, you just need to change OWNER to your GitHub organization or username in your project’s ‘.npmrc’ file. This instructs npm to redirect all package requests to GitHub Package Registry, which then serves any requests for packages in your account. In the future, the team plans to expand this feature to support other npm sources, add proxy support for other package types including Maven, NuGet, and Ruby, and, to prevent outages, build a permanent cache on top of the proxy service.

Another update is that the feature that automatically created releases when you published a package has been removed. Explaining the reason, the team wrote in the announcement, “Many customers expressed that automatically creating a release for every package published was unexpected and undesirable and that it led to conflicts for repositories that were managing their releases closely already. As of today, publishing a package will no longer create an accompanying release.”

The service is currently available in a limited public beta. GitHub plans to make it generally available at GitHub Universe later this year; until then, it is seeking feedback through the GitHub Package Registry survey. You can read the official announcement to know more.

Other news in programming

Core Python team confirms sunsetting Python 2 on January 1, 2020
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Go 1.13 releases with error wrapping, TLS 1.3 enabled by default, improved number literals, and more


Project management platform ClubHouse announces ‘Free Plan’ for up to 10 users and a new documentation tool

Sugandha Lahoti
12 Sep 2019
3 min read
Clubhouse, a popular project management platform, has announced a free plan for smaller teams and a new collaborative documentation tool called Clubhouse Write. What is interesting is that although there are a number of competitors in the project management space, including the popular Atlassian Jira, few if any offer a comparable free tier.

Clubhouse provides a Free Plan for smaller teams of up to 10 users

This no-cost option gives teams of up to 10 users unlimited access to Clubhouse core features such as Stories, Epics, and Milestones for free. These features show how the everyday tasks of a team contribute towards a larger company goal. Additional features for support and additional security are available in the Standard and Enterprise plans for larger teams.

All current Small Plan customers with 10 users or fewer will be automatically transitioned to the Free Plan. Organizations that previously paid an annual fee and have 10 or fewer users will be refunded the difference in price. Once a team adds an 11th user, it will transition to the current Standard Plan. Although the Free Plan does not support Observers, teams that have Observers on a current Small Plan will be allowed to keep their existing Observers.

Users were quite excited about the new Free Plan, commenting about it on social media. “You guys rock! One less expense to worry about it until I hit my stride. I'll gladly be paying for 11+ members when I can reach my goals,” reads one comment. Another says, “Thanks! I LOVE CLUBHOUSE! I would still gladly pay $10/mth maybe you should have made free for teams up to 5, but then kept small for 5-10 :)”

Clubhouse Write, a collaborative documentation tool

Along with the Free Plan announcement, Clubhouse has introduced Write, a real-time collaborative documentation tool. The product is currently in beta and will “make it easier for your software team to document, collaborate, and ideate together.” Software development teams will be able to collaborate on, organize and comment on project documentation in real time for inter-team communication. Development teams can organize their Docs into multiple Collections, and can choose to keep a Doc private or publish it to the whole Workspace. Users will also be notified when there are new comments on followed Docs.

In an interview with TechCrunch, Clubhouse discussed how these offerings position it against competitors such as Atlassian’s project management tool Jira; Clubhouse Write will compete head-on with Atlassian’s team collaboration product Confluence.

Twitterati were also quite excited about this new development.

https://twitter.com/kkukshtel/status/1171829400951824384
https://twitter.com/kieranmoolchan/status/1171450725877997568

Other interesting news in Tech

The Eclipse Foundation releases Jakarta EE 8, the first truly open-source, vendor-neutral Java EE
The Continuous Intelligence report by Sumo Logic highlights the rise of Multi-Cloud adoption and open source technologies like Kubernetes
Apple’s September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, Apple TV+, iPad, and more