
Tech News - Application Development

279 Articles
Bhagyashree R
29 Aug 2019
3 min read

Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs

Yesterday, Microsoft announced that it supports the addition of its Extended File Allocation Table (exFAT) file system to the Linux kernel and has publicly released its technical specification.

https://twitter.com/OpenAtMicrosoft/status/1166742237629308928

Launched in 2006, exFAT is the successor to Microsoft's FAT and FAT32 file systems, which are widely used in flash memory storage devices such as USB drives and SD cards. It uses 64 bits to describe file size and allows clusters as large as 32MB. According to the specification, it was designed with simplicity and extensibility in mind.

John Gossman, Microsoft Distinguished Engineer and Linux Foundation Board Member, wrote in the announcement, “exFAT is the Microsoft-developed file system that’s used in Windows and in many types of storage devices like SD cards and USB flash drives. It’s why hundreds of millions of storage devices that are formatted using exFAT “just work” when you plug them into your laptop, camera, and car.”

Because exFAT was previously proprietary, mounting these flash drives and cards on Linux machines required installing additional software, such as a FUSE-based exFAT implementation. Supporting exFAT in the Linux kernel will give users a full-featured implementation that can also be more performant than the FUSE one. Microsoft shared that the exFAT code incorporated into the Linux kernel will be licensed under GPLv2.

https://twitter.com/OpenAtMicrosoft/status/1166773276166828033

In addition to supporting exFAT in the Linux kernel, Microsoft hopes that its specification becomes part of the Open Invention Network’s (OIN) Linux System Definition, which would allow exFAT to be cross-licensed royalty-free. Keith Bergelt, OIN's CEO, told ZDNet, "We're happy and heartened to see that Microsoft is continuing to support software freedom. They are giving up the patent levers to create revenue at the expense of the community. This is another step of Microsoft's transformation in showing it's truly committed to Linux and open source."

The next edition of the Linux System Definition is expected to be published in the first quarter of 2020, after which any OIN member will be able to use exFAT without paying a patent royalty. The Linux Foundation also welcomed Microsoft's move to bring exFAT to the Linux kernel:

https://twitter.com/linuxfoundation/status/1166744195199115264

Other developers shared their excitement. A Hacker News user commented, “OMG, I can't believe we finally have a cross-platform read/write disk format. At last. No more Fuse. I just need to know when it will be available for my Raspberry Pi.”

Read the official announcement by Microsoft for more details.

Microsoft Edge Beta is now ready for you to try
Microsoft introduces public preview of Azure Dedicated Host and updates its licensing terms
CERN plans to replace Microsoft-based programs with an affordable open-source software
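The practical difference for Linux users can be sketched with a pair of mount commands. This is illustrative only: the device node /dev/sdb1 and mount point are placeholders, and package names are the Debian/Ubuntu ones of the time.

```shell
# Before in-kernel support: install and use the userspace FUSE driver
sudo apt install exfat-fuse exfat-utils      # package names vary by distro
sudo mount.exfat-fuse /dev/sdb1 /mnt/usb     # /dev/sdb1 is a placeholder device

# With an in-kernel exFAT driver, the stock mount path works directly,
# with no extra userspace filesystem daemon involved:
sudo mount -t exfat /dev/sdb1 /mnt/usb
```

The kernel path avoids the user-space round trips of FUSE, which is where the performance gain the article mentions comes from.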

Amrata Joshi
29 Aug 2019
3 min read

Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks

Today, the team at ActiveState, a software company known for building Perl, Python, and Tcl runtime environments, introduced the ActiveState Platform command-line interface (CLI), the State Tool. The new CLI aims to automate manual tasks such as the setup of development and test systems. With this tool, all the instructions in a project README can be reduced to a single command.

How can the State Tool benefit developers?

Eases ad hoc tasks
The State Tool addresses tasks that trouble developers at project setup, or environment setups that don’t work the first time. It also helps developers manage dependencies, system libraries, and other such tasks that affect productivity and usually eat into coding time. The State Tool can be used to automate the ad hoc tasks developers run into daily.

Deployment of runtime environments
Developers can now deploy a consistent runtime environment into a virtual environment on their machine, and across CI/CD systems, with a single command.

Sharing secrets and cross-platform scripts
Developers can centrally create secrets that are securely shared among team members without resorting to a password manager, email, or Slack. They can also create and share cross-platform scripts that include secrets, for kicking off builds and running tests as well as simplifying and speeding up common development tasks. Secrets are incorporated into scripts simply by referencing their names.

Automation of workflows
All the workflows developers handle can now be centrally automated with the tool.

Jeff Rouse, vice president of product management, said in a statement, “Developers are a hardy bunch. They suffer through a thousand annoyances at project startup/restart time, but soldier on anyway. It’s just the way things have always been done. With the State Tool, it doesn’t have to stay that way. The State Tool addresses all the hidden costs in a project that sap developer productivity. This includes automating environment setup to secrets sharing, and even automating the day to day scripts that everyone counts on to get their jobs done. Developers can finally stop solving the same annoying problems over and over again, and just rely on the State Tool so they can spend more time coding.”

To know more about this news, check out the official page.

Podcasting with Linux Command Line Tools and Audacity
GitHub’s ‘Hub’ command-line tool makes using git easier
Command-Line Tools
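The workflow described above can be pictured roughly as follows. The subcommand shapes, the org/project name, and the secret name are illustrative rather than exact, and should be checked against ActiveState's own documentation.

```shell
# Replace a multi-step README with one command: activate the project's
# runtime environment in the current shell (org/project is a placeholder)
state activate ExampleOrg/example-project

# Store a secret centrally so teammates can use it without email or Slack
# (subcommand shape and secret name are illustrative)
state secrets set project.DB_PASSWORD

# Run a named, cross-platform script defined for the project
state run tests
```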

Vincy Davis
22 Aug 2019
2 min read

Qt introduces Qt for MCUs, a graphics toolkit for creating a fluid user interface on microcontrollers

Yesterday, the Qt team introduced Qt for MCUs, a new graphics toolkit for creating fluid user interfaces (UIs) on cost-effective microcontrollers (MCUs). The toolkit lets new and existing users take advantage of the Qt tools and libraries already used for Device Creation, enabling companies to provide a better user experience.

Petteri Holländer, Senior Vice President of Product Management at Qt, said, “With the introduction of Qt for MCUs, customers can now use Qt for almost any software project they’re working on, regardless of target – with the added convenience of using just one technology framework and toolset.” He further added, “This means that both existing and new Qt customers can pursue the many business growth opportunities offered by connected devices – across a wide and diverse range of industries.”

Qt for MCUs uses the Qt Modeling Language (QML) and Qt’s design and development tools for building fast, customized Qt applications. “With the frontend defined in declarative QML and the business logic implemented in C/C++, the end result is a fluid graphical UI application running on microcontrollers,” says the Qt team.

Key benefits offered by Qt for MCUs
Existing skill sets can be reused for microcontroller development
The same technology can be used in high-end and mass-market devices, yielding low maintenance costs
No compromise on graphics performance, hence reduced hardware costs
Users can upgrade from a legacy solution to the cross-platform graphical toolkit

Check out the Qt for MCUs website for more information.

Qt and LG Electronics partner to make webOS as the platform of choice for embedded smart devices
Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more
Qt Creator 4.9.0 released with language support, QML support, profiling and much more

Amrata Joshi
19 Aug 2019
4 min read

Git 2.23 released with two new commands ‘git switch’ and ‘git restore’, a new tutorial, and much more!

Last week, the team behind Git released Git 2.23, which comes with experimental commands, backward-compatibility fixes, and much more. The release received contributions from over 77 contributors, 26 of them new.

What’s new in Git 2.23?

Experimental commands
This release comes with a new pair of experimental commands, git switch and git restore, which provide a better interface to git checkout. “Two new commands "git switch" and "git restore" are introduced to split "checking out a branch to work on advancing its history" and "checking out paths out of the index and/or a tree-ish to work on advancing the current history" out of the single "git checkout" command,” the official mail thread reads.

git checkout can be used to change branches with git checkout <branch>. If the user doesn’t want to switch branches, git checkout can be used to change individual files, too. The new commands aim to separate the responsibilities of git checkout into two narrower categories: operations that change branches, and operations that change files.

Backward compatibility
The "--base" option of "format-patch" is now compatible with "git patch-id --stable".

Git fast-export/import pair
The "git fast-export/import" pair can now handle commits with log messages in encodings other than UTF-8.

git clone --recurse-submodules
"git clone --recurse-submodules" has learned to set up submodules to ignore commit object names recorded in the superproject gitlink.

git diff/grep
A pattern for extracting funcnames and word boundaries for Rust has been added to "git diff/grep".

git fetch and git pull
"git fetch" and "git pull" now report when a fetch results in non-fast-forward updates, letting the user notice the unusual situation.

git status
With this release, the extra blank lines in "git status" output have been reduced.

Developer support
This release comes with developer support for emulating unsatisfied prerequisites in tests, to ensure that the remainder of the test suite succeeds when tests with prerequisites are skipped.

A new tutorial for git-core developers
This release comes with a new tutorial targeting aspiring git-core developers. It demonstrates the end-to-end workflow of creating a change to the Git tree, sending it for review, and making changes based on comments.

Bug fixes in Git 2.23
In earlier versions, "git worktree add" failed when another worktree connected to the same repository was corrupt; this has been corrected.
An issue with file descriptors has been fixed.
This release comes with updated parameter validation.
The code for parsing scaled numbers out of configuration files has been made more robust and easier to follow.

A few users seem happy about the changes; one commented on Hacker News, “It's nice to hear that there appears to be progress being made in making git's tooling nicer and more consistent. Git's model itself is pretty simple, but the command line tools for working with it aren't and I feel that this fuels most of the "Git is hard" complaints.” Others are still skeptical about the new commands; another user commented, “On the one hand I'm happy on the new "switch" and "restore" commands. On the other hand, I wonder if they truly add any value other than the semantic distinction of functions otherwise present in checkout.”

To know more about this release in detail, read the official blog post on GitHub.

GitHub has blocked an Iranian software developer’s account
GitHub services experienced a 41-minute disruption yesterday
iPhone can be hacked via a legit-looking malicious lightning USB cable worth $200, DefCon 27 demo shows
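The split between the two new commands is easy to see in a scratch repository. A minimal sketch, assuming git 2.23 or later; the directory, branch, and file names are just examples:

```shell
# Set up a throwaway repository
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m "initial commit"

# Branch operations now live in 'git switch'
# (previously: git checkout -b feature)
git switch -c feature
git branch --show-current        # prints "feature"

# File operations now live in 'git restore'
# (previously: git checkout -- <path>)
echo "hello" > notes.txt
git add notes.txt && git commit -q -m "add notes"
echo "scratch edit" >> notes.txt
git restore notes.txt            # discard the working-tree change
cat notes.txt                    # prints "hello"
```

The old git checkout still works; the new commands just make the intent of each operation explicit.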

Vincy Davis
19 Aug 2019
4 min read

Rails 6 releases with Action Mailbox, Parallel Testing, Action Text, and more!

After a long wait, the stable version of Rails 6 is finally available. Five days ago, David Heinemeier Hansson, the creator of Ruby on Rails, released the final version, which includes major new features such as Action Mailbox, Action Text, Parallel Testing, and Action Cable Testing. Rails 6 also brings many minor changes, fixes, and upgrades in Railties, Action Pack, Action View, and more. This version requires Ruby 2.5.0+.

Hansson says, “While we took a little while longer with the final version than expected, the time was spent vetting that Rails 6 is solid.” He also notes that GitHub, Shopify, Basecamp, and other companies and applications have already been running the pre-release version of Rails 6 in production.

https://twitter.com/dhh/status/1162426045405921282

Read More: The first release candidate of Rails 6.0.0 is now out!

Major new features in Rails 6

Action Mailbox
This new framework routes incoming emails to controller-like mailboxes for processing in Rails. Action Mailbox ships with ingresses for Amazon SES, Mailgun, Mandrill, Postmark, and SendGrid; inbound mail can also be handled via the built-in Exim, Postfix, and Qmail ingresses. Inbound emails are turned into InboundEmail records using Active Record and can be routed asynchronously using Active Job to one or several dedicated mailboxes. To know more about the basics, head over to the Action Mailbox basics guide.

Action Text
Action Text includes the Trix editor, which handles formatting, links, quotes, lists, embedded images, and galleries. Rich text content is saved in a RichText model associated with an existing Active Record model in the application. For an overview, read the Action Text overview page.

Parallel Testing
Parallel Testing lets users parallelize their test suite, reducing the time required to run it. Forking processes is the default method. To learn how to run tests in parallel, check out the parallel testing guide.

Action Cable Testing
Action Cable testing tools allow users to test their Action Cable functionality at the connection, channel, and broadcast levels. For details on connection and channel test cases, head over to the Action Cable testing guide.

Other changes in Rails 6

Railties
Railties handles the bootstrapping process in a Rails application and provides the core of the Rails generators. Multiple-database support for the rails db:migrate:status command has been added, and a new guard has been introduced to protect against DNS rebinding attacks.

Action Pack
The Action Pack framework handles and responds to web requests, and provides mechanisms for routing, controllers, and more. Rails 6 allows the use of #rescue_from for handling parameter-parsing errors, and a new middleware, ActionDispatch::HostAuthorization, has been added to guard against DNS rebinding attacks.

Developers are excited about the new features in Rails 6, especially parallel testing. A user on Hacker News comments, “Wow, Multiple DB and Parallel Testing is super productive. I hope framework maintainers of other language community should also get inspired by these features.” Another comment reads, “The multiple database support is really exciting. Anything that makes it easier to push database reads to replicas is huge.” Another user says, “Congrats to the Rails team! I can't praise Rails enough. Such a huge boost in productivity for prototyping or full production app. I use it for both work or side project. I can't imagine a world without it. Long live Rails!”

Twitter users are also praising the Rails 6 release:

https://twitter.com/tenderlove/status/1162566272271339521
https://twitter.com/AviShastry/status/1162755780229107713
https://twitter.com/excid3/status/1162426797046284288

For the minor changes, fixes, and upgrades, check out the Ruby on Rails 6.0 Release Notes, and head over to the Ruby on Rails blog for more details about the release.

GitLab considers moving to a single Rails codebase by combining the two existing repositories
Rails 6 will be shipping source maps by default in production
Ruby on Rails 6.0 Beta 1 brings new frameworks, multiple DBs, and parallel testing

Amrata Joshi
13 Aug 2019
2 min read

GNU Radio 3.8.0.0 releases with new dependencies, Python 2 and 3 compatibility, and much more!

Last week, the team behind GNU Radio, the free and open-source software development toolkit, announced the release of GNU Radio 3.8.0.0, which comes with a few major changes and deprecations.

Major changes in GNU Radio 3.8.0.0

Dependencies
New dependencies have been introduced, including MPIR/GMP, Qt5, codec2, and gsm. A few dependencies have been removed, including libusb, Qt4, and CppUnit.

Python compatibility
This release is compatible with both Python 2 and Python 3. GNU Radio 3.8 will be the last Py2k-compatible release series.

Gengen replaced
Gengen (GENerator GENerator), a tool that generates text generators, has been replaced by templates.

gnuradio-runtime
The team has reworked fractional tag-time handling, in the context of resamplers.

C++ generation
C++ code generation has been introduced as an option.

gr-utils
gr_modtool has been improved.

Deprecations in GNU Radio 3.8

Removed modules
The modules gr-comedi, gr-fcd, and gr-wxgui have been removed. gr-comedi was removed because it had zero active code contributions during the 3.7 lifecycle; gr-fcd because it is currently untestable by the CI and received no code contributions.

A few users are excited to experiment with GNU Radio 3.8 in the near future. One commented on Hacker News, “GNU Radio is one of those examples of free software being hyper-niche yet super successful. It's something I want to start playing with in the near future.”

To know more about this news, check out the official post by GNU Radio.

GNU C Library version 2.30 releases with POSIX-proposed functions, support for Unicode 12.1.0, new Linux functions and more!
GNU APL 1.8 releases with bug fixes, FFT, GTK, RE and more
Debian 10 codenamed ‘buster’ released, along with Debian GNU/Hurd 2019 as a port
Amrata Joshi
12 Aug 2019
3 min read

Telegram introduces new features: Slow mode switch, custom titles, comments widget and much more!

Last week, the team at Telegram, the messaging app, introduced new features for group admins and users, including a Slow Mode switch, custom titles, video improvements, and much more.

What’s new in Telegram?

Slow Mode switch
Slow Mode gives admins more authority to manage a group: it lets the admin control how often members can send messages. Once an admin enables Slow Mode, users can send one message per the interval chosen, and a timer shows them how long they must wait before sending their next message. The feature is meant to make group conversations more orderly and to raise the value of each individual message. The official post suggests admins “Keep it on permanently, or toggle as necessary to throttle rush hour traffic.”

Image Source: Telegram

Custom titles
Group owners can now set custom titles for admins, like ‘Meme Queen’, ‘Spam Hammer’, or ‘El Duderino’. Custom titles are shown alongside the default admin labels. To add a custom title, edit an admin's rights in Group Settings.

Image Source: Telegram

Silent messages
Telegram now lets users message friends without any sound: just hold the send button to have any message or media delivered silently.

New features for videos
Videos shared on Telegram now show thumbnail previews as users scroll through them, to help find the moment they were looking for. If a user adds a timestamp like 0:45 to a video caption, it is automatically highlighted as a link, and tapping the timestamp plays the video from that spot.

Comments widget
The team has come up with a new tool, Comments.App, for commenting on channel posts. With the comments widget, users can log in with just two taps and comment with text and photos, as well as like, dislike, and reply to comments from others.

Some users are excited about the news but would like Telegram to enable end-to-end encryption by default. A user commented on Hacker News, “I really like Telegram. Only end-to-end encryption by default and in group chats would make it perfect.”

To know more about this news, check out the official post by Telegram.

Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests
Hacker destroys Iranian cyber-espionage data; leaks source code of APT34’s hacking tools on Telegram
Trick or a treat: Telegram announces its new ‘delete feature’ that deletes messages on both the ends

Vincy Davis
12 Aug 2019
4 min read

Ubuntu 19.10 will now support experimental ZFS root file-system install option

Last week, Ubuntu announced that the upcoming Ubuntu 19.10 will support ZFS as a root file system, via an installer option flagged as ‘experimental’. ZFS support will offer Ubuntu users an easy-to-use interface, automated operations, and high flexibility. Initially, support targets the desktop only, but the layout has been kept extensible for servers later on. Ubuntu has also warned users not to use ZFS on production systems yet; users should use it for experimentation and provide feedback.

Ubuntu develops a new user-space daemon, zsys
To make both the basic and advanced concepts of ZFS easily accessible and transparent to users, Ubuntu is developing a new user-space daemon called zsys, a ZFS system tool. It will allow multiple ZFS systems to run in parallel on the same machine, and brings other advantages such as automated snapshots, separation of user data from system and persistent data, and management of complex ZFS dataset layouts. Ubuntu is designing the system so that people with little knowledge of ZFS can also use it flexibly. zsys’s cooperation with GRUB and the ZFS on Linux initramfs will yield advanced features, to be made official by Ubuntu later on. Users can check the current progress, and what’s next for zsys, on the Ubuntu project's GitHub page.

Progress update for Ubuntu 19.10
ZFS on Linux 0.8.1 is already shipped. It supports features like native encryption, trimming support, checkpoints, raw encrypted ZFS transmissions, project accounting and quota, and many performance enhancements. Some post-release upstream fixes have been backported to provide a better user experience and increase reliability. New support has been added to the GNU GRUB menu. All existing ZFS-on-root users will enjoy these benefits as soon as they update to Ubuntu 19.10. The post states, “We still have a lot to tackle and 19.10 will be only the beginning of the journey. However, the path forward is exciting and we hope to be able to bring something fresh and unique to ZFS users.”

Users are happy about ZFS support in Ubuntu 19.10.

https://twitter.com/jtteag/status/1159143800821952514

A user on Hacker News comments, “Having been a ZFS fan since the twilight of OpenSolaris, I'm very glad to see ZoL taking off. Rolling it into Ubuntu and making it officially supported was a great move - after some frustration with trying to run ZFS on a CentOS box and having it occasionally break after a kernel update, having it easily available on Ubuntu was like a breath of fresh air. Having it readily available as a root filesystem, and having TRIM support at long last, is great news.”

A few users, however, are unhappy about the move because of the maintenance burden ZFS brings. A Redditor says, “I'm a big fan of Ubuntu, use it on one of my own machines and recommend it to people. But almost every time they have decided to go it alone and make something a unique selling point it has backfired (Upstart, Mir, Unity, bzr, CouchDB, Ubuntu one). No other mainstream distro is going to adopt ZFS. Probably ubuntu will drop it in a few years when they realize they can't carry the maintenance burden. If you use ZFS for your file system then you won't be able to use standard recovery tools or access it from a dual boot. You won't be able to revert back to and older ubuntu version. You won't be able to install upstream kernels.”

For more details, head over to the Ubuntu blog.

Canonical, the company behind the Ubuntu Linux distribution, was hacked; Ubuntu source code unaffected
Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards
Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
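For readers who want to try the experimental option, the day-to-day workflow that zsys is meant to automate can be sketched with plain ZFS commands. The pool and dataset names below are illustrative (loosely following Ubuntu installer conventions), not guaranteed, and these commands require a running ZFS pool:

```shell
# Inspect the dataset layout the installer created (pool name assumed: rpool)
zfs list -r rpool

# Take a manual snapshot of the root dataset before a risky change
sudo zfs snapshot rpool/ROOT/ubuntu@before-upgrade

# Roll back to that snapshot if the change goes wrong
sudo zfs rollback rpool/ROOT/ubuntu@before-upgrade

# List existing snapshots (zsys aims to create these automatically
# around system changes)
zfs list -t snapshot
```

Snapshots and rollbacks like these are the primitives behind the automated system states zsys exposes through GRUB.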

Vincy Davis
05 Aug 2019
3 min read

GNU C Library version 2.30 releases with POSIX-proposed functions, support for Unicode 12.1.0, new Linux functions and more!

Last week, GNU C Library version 2.30 was made available to all users. The major highlights of this release include new POSIX (Portable Operating System Interface)-proposed functions, support for Unicode 12.1.0, support for a --preload argument to preload shared objects, and new Linux functions such as getdents64, gettid, and tgkill. The GNU C Library is used in GNU systems and GNU/Linux systems, as well as on other systems that use Linux as the kernel. It is a portable, high-performance C library.

Major new features in GNU C Library version 2.30
The new POSIX-proposed functions pthread_cond_clockwait, pthread_mutex_clocklock, pthread_rwlock_clockrdlock, pthread_rwlock_clockwrlock, and sem_clockwait have been introduced. All of them allow waiting against CLOCK_MONOTONIC and CLOCK_REALTIME.
Support for Unicode 12.1.0 has been added; character encoding, character type info, and transliteration tables have been updated to Unicode 12.1.0.
The dynamic linker now accepts a --preload argument to preload shared objects, in addition to the LD_PRELOAD environment variable.
The getdents64, gettid, and tgkill functions have been added on Linux.
The memory allocation functions malloc, calloc, realloc, reallocarray, valloc, pvalloc, memalign, and posix_memalign now require the object size to be smaller than PTRDIFF_MAX. This helps the memory allocation functions avoid potential undefined behavior from pointer subtraction within the allocated object, which could overflow the ptrdiff_t type.

Deprecated features affecting compatibility
The functions clock_gettime, clock_getres, clock_settime, clock_getcpuclockid, and clock_nanosleep have been removed from the librt library for new applications.
The obsolete XSI STREAMS header files <stropts.h> and <sys/stropts.h>, and the RES_INSECURE1 and RES_INSECURE2 option flags for the DNS stub resolver, have been removed.
Support for the “inet6” option in /etc/resolv.conf and the RES_USE_INET6 resolver flag have been eliminated.
The Linux-specific <sys/sysctl.h> header and the sysctl function have been removed from GNU C Library version 2.30 and will not be present in future versions of glibc. The getentropy function can be used to obtain random bits.

Bug fixes in GNU C Library version 2.30
gettid() now has a wrapper in libc
nftw() no longer returns a dangling symlink's inode in libc
mtrace no longer hangs when MALLOC_TRACE is defined in malloc
memusagestat is no longer built using the system C library in malloc
A libpthread IFUNC resolver for vfork that could lead to a crash in nptl has been fixed

These are a select few updates; for more information, you may go through the libc sourceware page.

Debian 10 codenamed ‘buster’ released, along with Debian GNU/Hurd 2019 as a port
Google proposes a libc in LLVM, Rich Felker of musl libc thinks it’s a very bad idea
GNU APL 1.8 releases with bug fixes, FFT, GTK, RE and more

Vincy Davis
02 Aug 2019
4 min read

MacOS terminal emulator, iTerm2 3.3.0 is here with new Python scripting API, a scriptable status bar, Minimal theme, and more

Yesterday, the team behind iTerm2, the GPL-licensed terminal emulator for macOS, announced the release of iTerm2 3.3.0. It is a major release with many new features such as the new Python scripting API, a new scriptable status bar, two new themes, and more. iTerm2 is a successor to iTerm and works on all macOS. It is an open source replacement for Apple's Terminal and is highly customizable as comes with a lot of useful features. Major highlights in iTerm2 3.3.0 A new Python scripting API which can control iTerm2 and extend its behavior has been added. It allows users to write Python scripts easily, thus enabling them to do extensive configuration and customization in iTerm2 3.3.0. A new scriptable status bar has been added with 13 built-in configurable components. iTerm2 3.3.0 comes with two new themes. The first theme is called as Minimal and it helps reducing visual cluster. The second theme can move tabs into the title bar, thus saving space while maintaining the general appearance of a macOS app and is called Compact. Other new features in iTerm2 3.3.0 The session, tab and window titles have been given a new appearance to make it more flexible and comprehensible. It is now possible to configure these titles separately and also to select what type of information it shows per profile. These titles are integrated with the new Python scripting API. The tabs title has new icons, which either indicates a running app or a fixed icon per profile. A new tool belt called ‘Actions’ has been introduced in iTerm2 3.3.0. It provides shortcuts  to frequent actions like sending a snippet of a text. A new utility ‘it2git’ which allows the git status bar component to show git state on a remote host, has been added. New support for crossed-out text (SGR 9) and automatically restarting a session when it ends has also been added in iTerm2 3.3.0. 
Other improvements in iTerm2 3.3.0

There are many visual improvements and an updated app icon. Various pages of the preferences have been rearranged to make them more visually appealing. The password manager can be used to enter a password securely, and a new option to log Automatic Profile Switching messages to the scripting console has been added. The performance of the long scrollback history has also been improved.

Users love the new features in the iTerm2 3.3.0 release, especially the new Python API, the scriptable status bar, and the new Minimal mode.

https://twitter.com/lambdanerd/status/1157004396808552448
https://twitter.com/alloydwhitlock/status/1156962293760036865
https://twitter.com/josephcs/status/1157193431162036224
https://twitter.com/dump/status/1156900168127713280

A user on Hacker News comments, “First off, wow love the status bar idea.” Another user on Hacker News says, “Kudos to Mr. Nachman on continuing to develop a terrific piece of macOS software! I've been running the 3.3 betas for a while and some of the new functionality is really great. Exporting a recording of a terminal session from the "Instant Replay" panel is very handy!”

A few users are not impressed with the iTerm2 3.3.0 features and compare it with the Terminal app. A comment on Hacker News reads, “I like having options but wouldn’t recommend iTerm. Apple’s Terminal.app is more performant rendering text and more responsive to input while admittedly having somewhat less unnecessary features. In fact, iTerm is one of the slowest terminals out there! iTerm used to have a lot of really compelling stuff that was missing from the official terminal like tabs, etc that made straying away from the canonical terminal app worth it but most of them eventually made their way to Terminal.app so nowadays it’s mostly just fluff.”

For the full list of improvements in iTerm2 3.3.0, visit the iTerm2 changelog page.

Apple previews macOS Catalina 10.15 beta, featuring Apple music, TV apps, security, zsh shell, driverKit, and much more!
WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad and more
Safari Technology Preview release 83 now available for macOS Mojave and macOS High Sierra

Electron 6.0 releases with improved Promise support, native Touch ID authentication support, and more

Bhagyashree R
01 Aug 2019
3 min read
On Tuesday, the team behind Electron, the web framework for building desktop apps, announced the release of Electron 6.0. It comes with further improvements to ‘Promise’ support, native Touch ID authentication support for macOS, native emoji and color picker methods, and more. This release upgrades to Chrome 76, Node.js 12.4.0, and V8 7.6.

https://twitter.com/electronjs/status/1156273653635407872

Promisification of functions continues

Starting from Electron 5.0, the team introduced a process called “promisification”, in which callback-based functions are converted to return ‘Promises’. In Electron 6.0, the team has converted 26 functions to return Promises while still supporting callback-based invocation. Among these “promisified” functions are ‘contentTracing.getCategories()’, ‘cookies.flushStore()’, ‘dialog.showCertificateTrustDialog()’, and more.

Three new variants of the Helper app

The hardened runtime was introduced to prevent exploits like code injection, DLL hijacking, and process memory space tampering. To serve that purpose, however, it restricts things like writable-executable memory and loading code signed by a different Team ID. If your app relies on such functionality, you can add an entitlement to disable an individual protection. To enable a hardened runtime in an Electron app, special code signing entitlements were granted to Electron Helper. Starting from Electron 6.0, three new variants of the Helper app are added to keep these granted entitlements scoped to the process types that require them: ‘Electron Helper (Renderer).app’, ‘Electron Helper (GPU).app’, and ‘Electron Helper (Plugin).app’. Developers using ‘electron-osx-sign’ to codesign their Electron app do not have to make any changes to their build logic, but if you are using custom scripts instead, you will need to ensure that the three Helper apps are correctly codesigned.
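The “promisification” pattern described above can be sketched in a few lines. Note this is an illustrative sketch only: `readConfig` and `promisify` below are made-up names demonstrating the general callback-to-Promise conversion, not Electron's actual internals or API.

```typescript
// Node-style callback signature: error first, result second.
type Callback<T> = (err: Error | null, result?: T) => void;

// Hypothetical callback-based function standing in for a legacy API.
function readConfig(key: string, cb: Callback<string>): void {
  if (key === "theme") cb(null, "dark");
  else cb(new Error(`unknown key: ${key}`));
}

// Generic wrapper: turns a callback-based function into one that
// returns a Promise, while the original stays callable as before.
function promisify<A, T>(
  fn: (arg: A, cb: Callback<T>) => void
): (arg: A) => Promise<T> {
  return (arg: A) =>
    new Promise<T>((resolve, reject) => {
      fn(arg, (err, result) => (err ? reject(err) : resolve(result as T)));
    });
}

const readConfigAsync = promisify(readConfig);
readConfigAsync("theme").then((value) => console.log(value)); // logs "dark"
```

Supporting both styles at once, as Electron 6.0 does, simply means keeping the callback path alive inside the wrapper rather than replacing it.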
To correctly package your application with these new helpers, use ‘electron-packager@14.0.4’ or higher.

Miscellaneous changes in Electron 6.0

Electron 6.0 brings native Touch ID authentication support for macOS. There are now native emoji and color picker methods for Windows and macOS. The ‘chrome.runtime.getManifest’ API for Chrome extensions has been added, which returns details about the app or extension from the manifest. The ‘<webview>.getWebContentsId()’ method has been added, which allows getting the WebContents ID of WebViews when the remote module is disabled. Support has been added for the Chrome extension content script option ‘all_frames’. This option allows an extension to specify whether JS and CSS files should be injected into all frames or only into the topmost frame in a tab.

With Electron 6.0, the team has laid the groundwork for a future requirement that all native Node modules loaded in the renderer process be either N-API or Context Aware. This is being done for faster performance, better security, and a reduced maintenance workload. Along with the release announcement, the team also announced the end of life of Electron 3.x.y and recommended upgrading to a newer version of Electron. To know all the new features in Electron 6.0, check out the official announcement.

Electron 5.0 ships with new versions of Chromium, V8, and Node.js
The Electron team publicly shares the release timeline for Electron 5.0
How to create a desktop application with Electron [Tutorial]
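To make the ‘all_frames’ option concrete, here is a sketch of the relevant manifest fragment, expressed as a TypeScript object literal. The extension name and file names are illustrative placeholders; the keys (‘content_scripts’, ‘matches’, ‘js’, ‘css’, ‘all_frames’) follow the standard Chrome extension manifest format.

```typescript
// Sketch of a Chrome extension manifest fragment using the
// `all_frames` content script option that Electron 6.0 now supports.
const manifest = {
  name: "example-extension", // placeholder name
  version: "1.0",
  content_scripts: [
    {
      matches: ["<all_urls>"],
      js: ["inject.js"],     // placeholder script file
      css: ["style.css"],    // placeholder stylesheet
      all_frames: true,      // inject into every frame, not just the topmost one
    },
  ],
};

console.log(manifest.content_scripts[0].all_frames); // true
```

With ‘all_frames’ omitted or set to false, the scripts run only in the top frame of each tab; setting it to true injects them into iframes as well.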

CERN plans to replace Microsoft-based programs with an affordable open-source software

Amrata Joshi
26 Jul 2019
3 min read
Last month, CERN, one of the world's leading scientific research organizations, planned to stop using Microsoft-based programs and to look for affordable open-source software instead. For the past 20 years, CERN had been using Microsoft products at a discounted "academic institution" rate. Things changed in March when its previous contract ended and Microsoft revoked CERN's academic status; as per a CERN blog post, licensing costs have increased under the new contract.

Meanwhile, CERN is now focusing on its year-old project, the Microsoft Alternatives project (MAlt), and plans to migrate to open-source software. MAlt's principles of engagement are: delivering the same service to every category of CERN personnel, avoiding vendor lock-in to decrease risk and dependency, keeping hands on the data, and addressing common use cases. The official post reads, “The Microsoft Alternatives project (MAlt) started a year ago to mitigate anticipated software license fee increases. MAlt’s objective is to put us back in control using open software. It is now time to present more widely this project and to explain how it will shape our computing environment.”

https://twitter.com/Razican/status/1138818892825055233

This summer, MAlt will start with a pilot mail service for the IT department and volunteers. CERN plans to migrate all of its staff to the new mail service and also to move the Skype for Business clients and analogue phones to a softphone pilot. Microsoft agreed to increase CERN's fees over a ten-year period so that the institution could adapt, but as per CERN it was still unsustainable. Emmanuel Ormancey, a CERN system analyst, wrote in a blog post, “Although CERN has negotiated a ramp-up profile over ten years to give the necessary time to adapt, such costs are not sustainable.” Considering CERN's collaborative nature and its wide community, a large number of licenses are required to deliver services to everyone.
The cost per product becomes unaffordable when traditional per-user business models are applied, making commercial software licenses with a per-user fee structure untenable for CERN. Many other public research institutions have previously been affected by this new licensing structure.

A few users still think Microsoft was the better choice and point out that it would be difficult for CERN to migrate. A user commented on Hacker News, “Migrating away from Microsoft won't be easy. Despite high licensing costs, Windows, AD and Exchange are still great solutions with millions of people familiar with them, good documentation and support.” Others are happy about CERN's decision to support open source. Another user commented, “It is awesome to see how CERN is supporting open source. They have been long time users of our open core GitLab with 12,000 users https://about.gitlab.com/customers/cern/”

To know more about this news, check out the official post.

Softbank announces a second AI-focused Vision Fund worth $108 billion with Microsoft, Apple as major investors
Why are experts worried about Microsoft’s billion dollar bet in OpenAI’s AGI pipe dream?
Ex-Microsoft employee arrested for stealing over $10M from store credits using a test account

TypeScript 3.6 beta is now available!

Amrata Joshi
23 Jul 2019
2 min read
Last week, the team behind TypeScript announced the availability of TypeScript 3.6 Beta. The full release of TypeScript 3.6 is scheduled for the end of next month, with a Release Candidate coming a few weeks prior.

What’s new in TypeScript 3.6?

Stricter checking

TypeScript 3.6 comes with stricter checking for iterators and generator functions. Earlier versions didn’t let users of generators differentiate whether a value was yielded or returned from a generator. With TypeScript 3.6, users can narrow down values from iterators while dealing with them.

Simpler emit

The emit for constructs like for/of loops and array spreads can be a bit heavy, so TypeScript opts for a simpler emit by default that supports array types, and helps in iterating on other types using the --downlevelIteration flag. With this flag, the emitted code is more accurate, but it is also larger.

Semicolon-aware code edits

Older versions of TypeScript added semicolons to the end of every statement, which was not appreciated by many users as it did not go along with their style guidelines. TypeScript 3.6 can detect whether a file uses semicolons when applying edits, and if a file lacks semicolons, TypeScript does not add one.

DOM updates

The following are a few of the declarations that have been removed or changed within lib.dom.d.ts: WindowOrWorkerGlobalScope is used instead of GlobalFetch; non-standard properties on Navigator no longer exist; webgl or webgl2 is used instead of the experimental-webgl context.

To know more about this news, check out the official post.

Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more
TypeScript 3.5 releases with ‘omit’ helper, improved speed, excess property checks and more
Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more
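The stricter generator checking mentioned above rests on the `Generator<Yield, Return, Next>` type, which lets the compiler distinguish yielded values from the final returned value. A minimal sketch (the `countTo` function is a made-up example, not from the TypeScript release notes):

```typescript
// Yields numbers, then returns a string when finished.
function* countTo(limit: number): Generator<number, string, void> {
  for (let i = 1; i <= limit; i++) {
    yield i;
  }
  return "done";
}

const it = countTo(2);
it.next(); // { value: 1, done: false } -> value typed as number
it.next(); // { value: 2, done: false }
const last = it.next(); // { value: "done", done: true }

// Narrowing on `done` tells the compiler which type `value` has,
// which earlier versions of TypeScript could not express:
if (last.done) {
  const finished: string = last.value;
  console.log(finished); // "done"
}
```

Before 3.6, `value` was typed the same whether the generator yielded or returned, so code like the narrowing above had to fall back to casts.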

GitHub services experienced a 41-minute disruption yesterday

Bhagyashree R
23 Jul 2019
4 min read
Update: Yesterday, the GitHub team stated in a blog post what they had uncovered in their initial investigation: “On Monday at 3:46 pm UTC, several services on GitHub.com experienced a 41-minute disruption, and as a result, some services were degraded for a longer period. Our initial investigation suggests a logic error introduced into our deployment pipeline manifested during a subsequent and unrelated deployment of the GitHub.com website. This chain of events destabilized a number of internal systems, complicated our recovery efforts, and resulted in an interruption of service.”

It was not a very productive Monday for many developers, as GitHub started showing 500 and 422 error codes on their repositories. Several services on GitHub were down yesterday from around 15:46 UTC for 41 minutes, with some degraded for longer. GitHub engineers soon began their investigation, and all services were back to normal by 19:47 UTC.

https://twitter.com/githubstatus/status/1153391172167114752

The outage affected GitHub services including Git operations, API requests, and Gist, among others. The experiences that developers reported were quite inconsistent. Some developers said that though they were able to open the main repo page, they could not see the commit log or PRs. Others reported that all git commands requiring interaction with GitHub's remotes failed. A developer commented on Hacker News, “Git is fine, and the outage does not affect you and your team if you already have the source tree anywhere. What it does affect is the ability to do code reviews, work with issues, maybe even do releases. All the non-DVCS stuff.”

GitHub is yet to share the cause and impact of the downtime. However, developers took to different discussion forums to share what they think the reason behind the GitHub outage could be.
While some speculated that it might be GitHub's growing user base, others believed GitHub might still be moving “stuff to Azure after the acquisition.”

Developers also discussed steps they can take so that such outages do not affect their workflow in the future. One developer suggested avoiding a single point of failure by setting two different URLs for the same remote, so that a single push command pushes to both:

git remote set-url --add --push origin git@github.com:Foo/bar.git
git remote set-url --add --push origin git@gitlab.com:Foo/bar.git

Another developer suggested, “I highly recommend running at least a local, self-hosted git mirror at any tech company, just in these cases. Gitolite + cgit is extremely low maintenance, especially if you host them next to your other production services. Not to mention, if you get the self-hosted route you can use Gerrit, which is still miles better for code review than GitHub, Gitlab, bitbucket and co.”

Others joked that this was a good opportunity to take a few hours of break and relax. “This is the perfect time to take a break. Kick back, have a coffee, contemplate your life choices. That commit can wait, that PR (i was about to merge) can wait too. It's not the end of the world,” a developer commented.

Lately, we are seeing many cases of outages. Earlier this month, almost all of Apple's iCloud services were down for some users. On July 2, Cloudflare suffered a major outage due to a massive spike in CPU utilization in the network. Last month, Google Calendar was down for nearly three hours around the world. In May, Facebook and its family of apps WhatsApp, Messenger, and Instagram faced yet another outage. Last year, GitHub faced issues due to a failure in its data storage system, which left the site broken for a complete day.
Several developers took to Twitter to kill their time and vent their frustration:

https://twitter.com/jameskbride/status/1153332862587944960
https://twitter.com/BobString/status/1153329356284055552
https://twitter.com/pikesley/status/1153332278774439941
https://twitter.com/francesc/status/1153336190390550528

Cloudflare RCA: Major outage was a lot more than “a regular expression went bad”
EU’s satellite navigation system, Galileo, suffers major outage; nears 100 hours of downtime
Twitter experienced major outage yesterday due to an internal configuration issue

To create effective API documentation, know how developers use it, says ACM

Bhagyashree R
19 Jul 2019
5 min read
Earlier this year, the Association for Computing Machinery (ACM), in the January 2019 issue of Communication Design Quarterly (CDQ), discussed how developers use API documentation when getting into a new API and suggested a few guidelines for writing effective API documentation.

Application Programming Interfaces (APIs) are standardized and documented interfaces that allow applications to communicate with each other without having to know how they are implemented. Developers often turn to API references, tutorials, example projects, and other resources to understand how to use them in their projects. To support the learning process effectively and help writers produce optimized API documentation, the study tried to answer the following questions: Which information resources offered by the API documentation do developers use, and to what extent? What approaches do developers take when they start working with a new API? What aspects of the content hinder efficient task completion?

API documentation and content categories used in the study

The study was done with 12 developers (11 male and 1 female), who were asked to solve a set of pre-defined tasks using an unfamiliar public API. To solve these tasks, they were allowed to refer only to the documentation published by the API provider. The participants used the API documentation about 49% of the time while solving the tasks. On an individual level, there was not much variation, with the means for all but two participants ranging between 41% and 56%. The most used content category was the API reference, followed by the Recipes page. The aggregate time spent on the Recipes and Samples categories was almost equal to the time spent on the API reference category. The Concepts page, however, was used less often than the API reference.
Source: ACM

“These findings show that the API reference is an important source of information, not only to solve specific programming issues when working with an API developers already have some experience with, but even in the initial stages of getting into a new API, in line with Meng et al. (2018),” the study concludes.

How developers learn a new API

The researchers observed two different problem-solving behaviors, very similar to the opportunistic and systematic developer personas discussed by Clarke (2007). Developers with the opportunistic approach tried to solve the problem in an “exploratory fashion”. They were more intuitive, open to making errors, and often tried solutions without double-checking them in the documentation. This group did not invest much time in getting a general overview of the API before starting the first task. Developers in this group prefer fast and direct access to information over large sections of documentation. On the contrary, developers with the systematic approach tried to first get a deeper understanding of the API before using it. They took some time to explore the API and prepare the development environment before starting the first task. This group of developers attempted to follow the proposed processes and suggestions closely. They were also able to notice parts of the documentation that were not directly relevant to the given task.

What aspects of API documentation make it hard for developers to complete tasks efficiently?

Lack of transparent navigation and search function

Some participants felt that the API documentation lacked a consistent system of navigation aids and did not offer side navigation, including within-page links. Developers often needed a search function when they were missing a particular piece of information, such as a term they did not know.
As the documentation used in the test did not offer a search field, developers had to use a simple page search instead, which was often unsuccessful.

Issues with the high-level structuring of API documentation

The participants observed several problems in the high-level structuring of the API documentation, that is, the split of information into Concepts, Samples, API reference, and so on. For instance, when searching for a particular piece of information, participants sometimes found it difficult to decide which content category to select. It was particularly unclear how the content provided in Samples differed from that in Recipes.

Inability to reuse code examples

Most of the time, participants developed their solution using the sample code provided in the documentation. However, the efficient use of sample code was hindered by the presence of placeholders in the code referencing some other code example.

A few guidelines for writing efficient API documentation

Organizing the content according to API functionality: The API documentation should be divided into categories that reflect the functionality or content domain of the API. Participants would have found it more convenient if, instead of dividing the documentation into “Samples,” “Concepts,” “API reference,” and “Recipes,” the API had used categories such as “Shipment Handling,” “Address Handling,” and so on.

Enabling efficient access to relevant content: While designing API documentation, it is important to take specific measures to improve the accessibility of content that is relevant to the task at hand. This can be done by organizing the content according to API functionality, presenting conceptual information integrated with related tasks, and providing transparent navigation and a powerful search function.

Facilitating initial entry into the API: For this, you need to identify appropriate entry points into the API and relate particular tasks to specific API elements.
Provide clean and working code examples, provide relevant background knowledge, and connect concepts to code.

Supporting different development strategies: While creating API documentation, you should also keep in mind the different strategies developers adopt when approaching a new API. Both the content and the way it is presented should serve the needs of both opportunistic and systematic developers.

These were some observations and implications from the study. To know more, read the paper: How Developers Use API Documentation: An Observation Study.

GraphQL API is now generally available
Best practices for RESTful web services: Naming conventions and API Versioning [Tutorial]
Stripe’s API suffered two consecutive outages yesterday causing elevated error rates and response times