
Tech News

3711 Articles

NumPy drops Python 2 support. Now you need Python 3.5 or later.

Prasad Ramesh
17 Dec 2018
2 min read
In a GitHub pull request last week, the NumPy community decided to remove support for Python 2.7. Python 3.4 support will also be dropped with this pull request. So now, to use NumPy 1.17 and newer versions, you will need Python 3.5 or later. NumPy has supported both Python versions since 2010.

This move doesn't come as a surprise, with the Python core team itself dropping support for Python 2 in 2020. The NumPy team explained the move by noting that "Python 2 is an increasing burden on our limited resources". The discussion to drop Python 2 support in NumPy started almost a year ago.

Running pip install numpy on Python 2 will still install the last working version. But from here on, it may not contain the latest features released for Python 3.5 or higher. However, NumPy on Python 2 will still be supported until December 31, 2019. After January 1, 2020, it may not contain the newest bug fixes.

The Twitter audience sees this as a welcome move:
https://twitter.com/TarasNovak/status/1073262599750459392
https://twitter.com/esc___/status/1073193736178462720

A comment on Hacker News reads: "Let's hope this move helps with the transitioning to Python 3. I'm not a Python programmer myself, but I'm tired of things getting hairy on Linux dependencies written in Python. It almost seems like I always got to have a Python 2 and a Python 3 version of some packages so my system doesn't break."

Another one reads: "I've said it before, I'll say it again. I don't care for everything-is-unicode-by-default. You can take my Python 2 when you pry it from my cold dead hands."

Some researchers who use NumPy and SciPy still stick to Python 2, so this move from the NumPy team will help in getting everyone to work on a single version. One single supported version will surely help with the fragmentation. Often, Python developers find themselves in a situation where they have one version installed while a specific module is available or works properly only on another version. Some also argue about stability, claiming that Python 2 is more stable or has this or that feature. But the general sentiment is more supportive of adopting Python 3.

Read next:
Introducing numpywren, a system for linear algebra built on a serverless architecture
NumPy 1.15.0 release is out!
Implementing matrix operations using SciPy and NumPy
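For projects that still need to straddle the Python 2/3 transition described above, the split can be expressed with environment markers in the package metadata. The following is a minimal, hypothetical setup.py sketch (the project name and exact version pins are illustrative, not official NumPy guidance): Python 3.5+ installs pick up NumPy 1.17 and newer, while older interpreters stay on the last compatible release line.

```python
from setuptools import setup

setup(
    name="example-project",  # hypothetical package name
    version="0.1.0",
    install_requires=[
        # Python 3.5+ can take NumPy 1.17 and newer.
        'numpy>=1.17; python_version >= "3.5"',
        # Older interpreters stay capped at the last release line
        # that still supports them.
        'numpy<1.17; python_version < "3.5"',
    ],
)
```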


low.js, a Node.js port for embedded systems

Prasad Ramesh
17 Sep 2018
3 min read
Node.js is a popular backend for web development, widely used despite some of its flaws. For embedded systems, there is now low.js, a Node.js port with far lower system requirements. With low.js you can program JavaScript applications by utilizing the full Node.js API. You can run these on regular computers and also on embedded devices based on the $3 ESP32 microcontroller.

The JavaScript V8 engine at the center of Node.js is replaced with Duktape. Duktape is an embeddable ECMAScript E5/E5.1 engine with a compact footprint. Some parts of the Node.js system library are rewritten for a more compact footprint and use more native code. low.js currently uses under 2 MB of disk space with a minimum requirement of around 1.5 MB of RAM for the ESP32 version.

low.js features

low.js is good for hobbyists and people interested in electronics. It allows using Node.js scripts on smaller devices like routers, which are based on Linux or uClinux, without using much of the resources. This is great for scripting, especially if the devices communicate over the internet.

The neonious one is a microcontroller board based on low.js for ESP32, which can be programmed in JavaScript ES 6 with the Node API. It includes Wifi, Ethernet, additional flash and an extra I/O controller. The lower system requirements of low.js allow you to run it comfortably on the ESP32-WROVER module. The ESP32-WROVER costs under $3 for large orders and is a very cost-effective solution for IoT devices requiring a microcontroller and Wifi. low.js for ESP32 also adds the additional benefit of fast software development and maintenance. Specialized software developers are not needed for the microcontroller software.

How to install?

The community edition of low.js can be run on POSIX-based systems including Linux, uClinux, and Mac OS X. It is available on GitHub, and currently ./configure is not present. You might need some programming skills and knowledge to get low.js up and running on your system. The commands are as follows:

```
git clone https://github.com/neonious/lowjs
cd lowjs
git submodule update --init --recursive
make
```

low.js for ESP32 is the same as the community edition, but adapted for the ESP32 microcontroller. This version is not open source and is pre-flashed on the neonious one. For more information and documentation visit the low.js website.

Read next:
Deno, an attempt to fix Node.js flaws, is rewritten in Rust
Node.js announces security updates for all their active release lines for August 2018
Deploying Node.js apps on Google App Engine is now easy


Qml.Net: A new C# library for cross-platform .NET GUI development

Prasad Ramesh
10 Aug 2018
3 min read
Qml.Net is a C# library for cross-platform GUI development with a native dependency. It exposes the required object types to host a QML engine. In Qml.Net, QML and JavaScript together form the UI layer; it can be thought of as the view in MVC.

Qml.Net features

The PInvoke code in this .NET library is hand-crafted by developer Paul Knopf to ensure appropriate memory management and pointer ownership semantics. He is pretty confident about the library and mentions in his blog, "I'd bet you couldn't generate a segfault, even if you wanted to."

In Qml.Net, C# objects can be registered to be treated as QML components. You can then interoperate with them as you would with regular JavaScript objects. The registered C# objects serve as a portal through which the QML world can interact with your .NET objects. This has the added benefit of keeping your business and UI concerns cleanly separated. There will also be no chatty PInvoke calls for rendering. It is a great match.

A pre-compiled portable installation of Qt and the native C wrapper is available for Windows, OSX, and Linux. Developers don't have to bother with C/C++. All you need to know is QML, C#, and JavaScript; QML is fairly simple. QML can't really be classified as a language in the semantic sense. More appropriately, it can be considered a combination of JSON and JavaScript.

Qml.Net support and working

Qml.Net will work with any .NET language, including the popular C# and functional languages like F#. Your libraries will reference the pure .NET NuGet package, Qml.Net. The host process (Program.Main) references the native NuGet packages. This is dependent on the OS you are on:

- Qml.Net.WindowsBinaries
- Qml.Net.OSXBinaries
- Qml.Net.LinuxBinaries

Paul currently only tests his own models, which are C# objects registered with the QML engine. They are specific to each control/page.

Since Microsoft's announcement of .NET Core, there hasn't been any clear story for cross-platform GUI development. Although Microsoft plans to support WPF in .NET Core 3.0, it will be limited to Windows machines. With community involvement and support, Qml.Net can be a potential game changer. You can head to the GitHub repository and also view some hosted examples to get a better idea.

Read next:
Exciting New Features in C# 8.0
.NET Core completes move to the new compiler – RyuJIT
Microsoft Azure's new governance DApp: An enterprise blockchain without mining


Unity switches to WebAssembly as the output format for the Unity WebGL build target

Sugandha Lahoti
16 Aug 2018
2 min read
With the launch of the Unity 2018.2 release last month, Unity is finally making the switch to WebAssembly as the output format for the Unity WebGL build target. WebAssembly support was first teased in Unity 5.6 as an experimental feature. Unity 2018.1 marked the removal of the experimental label, and finally, in 2018.2, WebAssembly replaces asm.js as the default linker target (source: Unity Blog).

WebAssembly replaced asm.js because it is faster, smaller and more memory-efficient, which are all pain points of the Unity WebGL export. A WebAssembly file is a binary file (which is a more compact way to deliver code), as opposed to asm.js, which is text. In addition, code modules that have already been compiled can be stored in an IndexedDB cache, resulting in a really fast startup when reloading the same content. In WebAssembly, the code size for an empty project is ~12% smaller, or ~18% if 3D physics is included (source: Unity Blog).

WebAssembly also has its own instruction set. In development builds, it adds more precise error detection in arithmetic operations. In non-development builds, this kind of detection of arithmetic errors is masked, so the user experience is not affected.

asm.js added a restriction on the size of the Unity heap; its size had to be specified at build time and could never change. WebAssembly enables the Unity heap size to grow at runtime, which lets Unity content memory usage exceed the initial heap size.

Unity is now working on multi-threading support, which will initially be released as an experimental feature and will be limited to internal native threads (no C# threads yet). Debugging hasn't got any better. While browsers have begun to provide WebAssembly debugging in their devtools suites, these debuggers do not yet scale well to Unity3D sizes of content.

What's next to come

Unity is still working on new features and optimizations to improve startup times and performance:

- Asynchronous instantiation
- Structured cloning, which allows compiled WebAssembly to be cached in the browser
- Baseline and tiered compilation, to speed up instantiation
- Streaming instantiation to compile WebAssembly code while downloading it
- Multi-threading

You can read the full details on the Unity Blog.

Read next:
Unity 2018.2: Unity release for this year second time in a row!
GitHub for Unity 1.0 is here with Git LFS and file locking support
What you should know about Unity 2018 Interface


Python in Visual Studio Code released with enhanced Variable Explorer, Data Viewer, and more!

Amrata Joshi
27 Apr 2019
3 min read
This week, Microsoft's Python team announced a new release of the Python extension for Visual Studio Code. This release comes with an enhanced Variable Explorer and Data Viewer, and with improvements to the Python Language Server.

What's new in Python in Visual Studio Code?

Enhanced Variable Explorer and Data Viewer

This release comes with a built-in Variable Explorer along with a Data Viewer, which will help users easily view, inspect and filter the variables in their application, including lists, NumPy arrays, pandas data frames, and more. The release shows a section for variables while running code and cells in the Python Interactive window. On expanding it, users can see a list of the variables in the current Jupyter session. More variables will automatically show up as they get used in the code, and users can sort the variables into columns by clicking on each column header. Users can now double-click on each row or use the "Show variable in Data Viewer" button in order to view the full data of each variable in the newly added Data Viewer, and can perform a simple search over its values.

Improvements to debug configuration

In this release, the process of configuring the debugger has been simplified. If a user starts debugging through the Debug Panel and no debug configuration exists, the user will now be prompted to create a debug configuration for their application. Instead of manually configuring the launch.json file, users can now create a debug configuration through a set of menus.

Improvements to the Python Language Server

This release comes with fixes and improvements to the Python Language Server. The team has added back the features that were removed in the 0.2 release, including "Rename Symbol", "Go to Definition" and "Find All References". Also, the loading time and memory usage have been improved while importing scientific libraries such as pandas, Plotly, and PyQt5, especially while running in full Anaconda environments.

Read also: Visualizing data in R and Python using Anaconda [Tutorial]

Major changes

- The default behavior of the debugger has been changed to display return values.
- "Unit Test" has been renamed to "Test" or "Testing".
- The debugStdLib setting has been replaced with justMyCode.
- This release comes with a setting to enable/disable the data science code lens.
- The reliability of test discovery when using pytest has been improved.

Bug fixes

- The issues with cell spacing have been resolved.
- Problems with errors not showing up for imports have been fixed.
- Issues with the tabs in the comments section have been fixed.

To know more about this news, check out Microsoft's official blog post.

Read next:
Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
Debugging and Profiling Python Scripts [Tutorial]
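To try out the Variable Explorer and Data Viewer described above, a small cell like the following can be run in the Python Interactive window. This is a hypothetical sketch: the variable names and data are placeholders, and NumPy and pandas must be installed in the selected interpreter.

```python
# %% Run this cell in the Python Interactive window, then expand the
# "Variables" section; double-clicking a row opens the Data Viewer.
import numpy as np
import pandas as pd

temperatures = np.array([[21.5, 22.0, 19.8],
                         [23.1, 24.4, 22.9]])
df = pd.DataFrame(temperatures, columns=["mon", "tue", "wed"])
summary = df.describe()  # shows up as another inspectable variable
```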


What's new in Wireshark 2.6?

Savia Lobo
10 May 2018
2 min read
Less than ten months after Wireshark's last major release, the Wireshark community has now released Wireshark 2.6. Wireshark is one of the most popular tools for analyzing traffic over a network interface or a network stream. It is used for troubleshooting, analysis, development and education. Wireshark is based on the Gerald Combs-initiated "Ethereal" project, released under the terms of the GNU General Public License (GNU GPL).

Wireshark 2.6 is released with numerous innovations, improvements and bug fixes. The highlight of Wireshark 2.6 is that it is the last release that will support the legacy (GTK+) user interface; it will not be supported or available in Wireshark 3.0.

Major improvements since 2.5, the last version, include:

- This version now supports HTTP Request sequences.
- Support for MaxMind DB files has been added, while support for GeoIP and GeoLite Legacy databases has been removed.
- Windows packages are now built using Microsoft Visual Studio 2017.
- The IP map feature (the "Map" button in the "Endpoints" dialog) has been removed.

Some other improvements since version 2.4:

- Display filter buttons can now be edited, disabled, and removed via a context menu directly from the toolbar.
- Support for hardware timestamping of packets has been added.
- Application startup time has been reduced.
- Some keyboard shortcut mix-ups have been resolved by assigning new shortcuts to Edit → Copy methods.

New protocol support: many protocols have been added, including the following.

- ActiveMQ Artemis Core Protocol: This supports interceptors to intercept packets entering and exiting the server.
- Bluetooth Mesh Protocol: This allows Bluetooth Low Energy (BLE) devices to network together to carry data back to a gateway device, where it can be further routed to the internet.
- Steam In-Home Streaming discovery protocol: This allows one to use input and output on a single computer, and lets another computer actually handle the rendering, calculations, networking, and so on.

Bug fix: Dumpcap, a network traffic dump tool which lets one capture packet data from a live network and write the packets to a file, might not quit if Wireshark or TShark crashes (Bug 1419).

To know more about the updates in detail, read the Wireshark 2.6.0 Release Notes.

Read next:
What is Digital Forensics?
Microsoft Cloud Services get GDPR Enhancements
IoT Forensics: Security in an always connected world where things talk

GNOME 3.32 released with fractional scaling, improvements to desktop, web and much more

Amrata Joshi
14 Mar 2019
3 min read
Yesterday, the team at GNOME released the latest version of GNOME 3, GNOME 3.32, a free and open source desktop environment for Unix-like operating systems. This release comes with improvements to the desktop, the web browser and much more.

What's new in GNOME 3.32?

Fractional scaling

Fractional scaling is available as an experimental option that includes several fractional values with good visual quality on any given monitor. This feature is a major enhancement for the GNOME desktop. It requires manually adding scale-monitor-framebuffer to the settings key org.gnome.mutter.experimental-features.

Improved data structures in the GNOME desktop

This release comes with improvements to foundation data structures in the GNOME desktop, for a faster and snappier feel to the animations, icons and top shell panel. The search database has been improved, which helps in searching faster. Even the on-screen keyboard has been improved; it now supports an emoji chooser.

New automation mode in GNOME Web

GNOME Web now comes with a new automation mode which allows the application to be controlled by WebDriver. The reader mode has been enhanced and now features a set of customizable preferences and an improved style. With this release, touchpad users can take advantage of more gestures while browsing. For example, swipe left or right to go back or forward through browsing history.

New settings for permissions

Settings comes with a new "Application Permissions" panel that shows resources and permissions for various applications, including installed Flatpak applications. Users can now grant permissions to certain resources when requested by the application. The Sound settings have been enhanced to support a vertical layout and a more intuitive placement of options. With this release, the night light color temperature can now be adjusted for a warmer or cooler setting.

GNOME Boxes

GNOME Boxes tries to enable 3D acceleration for virtual machines if both the guest and host support it. This leads to better performance of graphics-intensive guest applications such as games and video editors.

Application management from multiple sources

This release can handle apps available from multiple sources, such as Flatpak and distribution repositories. With this release, Flatpak app entries now list the permissions required on the details page. This will give users a comprehensive understanding of what data the software will need access to. Even browsing application details gets faster with the new XML parsing library used in this release.

To know more about this release, check out the official announcement.

Read next:
GNOME team adds Fractional Scaling support in the upcoming GNOME 3.32
GNOME 3.32 says goodbye to application menus
Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes
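As a rough illustration of the manual fractional-scaling step mentioned above, the experimental flag can also be set from a script by shelling out to the gsettings CLI. This is only a sketch, assuming a GNOME 3.32 session where Mutter's experimental-features key is available and no other experimental features are already configured, since the command overwrites the whole list.

```python
import subprocess

# Enable fractional scaling by writing "scale-monitor-framebuffer" into
# Mutter's experimental-features key (overwrites any existing values).
subprocess.run(
    ["gsettings", "set", "org.gnome.mutter", "experimental-features",
     "['scale-monitor-framebuffer']"],
    check=True,
)
```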


GCC 9.1 releases with improved diagnostics, simpler C++ errors and much more

Amrata Joshi
11 Mar 2019
2 min read
Just two months ago, the team behind GCC (GNU Compiler Collection) made certain changes to GCC 9.1, and last week the team released GCC 9.1 with improved diagnostics, better source location information, and simpler C++ errors.

What's new in GCC 9.1?

Changes to diagnostics

The team added a left-hand margin that shows line numbers, so GCC 9.1 now has a new look for its diagnostics. The diagnostics can label regions of the source code in order to show relevant information; for example, when reporting a type mismatch, they can label the left-hand and right-hand sides of the "+" operator, and GCC highlights them inline. The team has also added a JSON output format, so GCC 9.1 now has a machine-readable output format for diagnostics.

C++ errors

The compiler usually has to consider several functions while dealing with C++ at a given call site and reject all of them for different reasons. Also, g++'s error messages need to be handled and a specific reason needs to be given for rejecting each function. This makes simple cases difficult to read. This release adds special-casing to simplify g++ errors for common cases.

Improved C++ syntax in GCC 9.1

A major issue with GCC's internal representation is that not every node within the syntax tree has a source location. For GCC 9.1, the team has worked to solve this problem so that most places in the C++ syntax tree now retain location information for longer.

Users can now emit optimization information

GCC 9.1 can automatically vectorize loops and reorganize them to work on multiple iterations at once. Users now have an option, -fopt-info, that helps in emitting optimization information.

Improved runtime library in GCC 9.1

This release comes with improved experimental support for C++17, including <memory_resource>. There is also support for opening file streams with wide character paths on Windows.

Arm specific

In this release, support for the deprecated Armv2 and Armv3 architectures and their variants has been removed. Support for the Armv5 and Armv5E architectures has also been removed.

To know more about this news, check out Red Hat's blog post.

Read next:
DragonFly BSD 5.4.1 released with new system compiler in GCC 8 and more
The D language front-end support finally merged into GCC 9
GCC 8.1 Standards released!


SteamVR introduces new controllers for game developers, the SteamVR Input system

Sugandha Lahoti
16 May 2018
2 min read
SteamVR announced new controller support, adding accessibility features to the virtual reality ecosystem. The SteamVR Input system lets you build controller bindings for any game, "even for controllers that didn't exist when the game was written," says Valve's Joe Ludwig in a Steam forum post. What this essentially means is that any past, present or future game can hypothetically add support for any SteamVR-compatible controller.

Supported controllers include the Xbox One gamepad, Vive Tracker, Oculus Touch, and motion controllers for HTC Vive and Windows Mixed Reality VR headsets.

The key-binding system of the SteamVR Input system allows users to build binding configurations. Users can adapt the controls of games to take into account user behavior such as left-handedness, a disability, or personal preference. These configurations can also be shared easily with other users of the same game via the Steam Workshop.

For developers, the new SteamVR Input system means easier adaptation of games to diverse controllers. Developers entirely control the default bindings for each controller type. They can also offer alternate control schemes directly without the need to change the games themselves. SteamVR Input works with every SteamVR application; it doesn't require developers to update their app to support it.

Hardware designers are also free to try more types of input, apart from the Vive Tracker, Oculus Touch and so on. They can expose whatever input controls exist on their device and then describe that device to the system. Most importantly, the entire mechanism is captured in an easy-to-use UI that is available in-headset under the Settings menu (source: Steam community).

For now, SteamVR Input is in beta. Details for developers are available on the OpenVR SDK 1.0.15 page. You can also see the documentation to enable native support in your applications. Hardware developers can read the driver API documentation to see how they can enable this new system for their devices.

Read next:
Google open sources Seurat to bring high precision graphics to Mobile VR
Oculus Go, the first stand-alone VR headset arrives!
Google Daydream powered Lenovo Mirage solo hits the market


GitHub now allows issue transfer between repositories; a public beta version

Savia Lobo
01 Nov 2018
3 min read
Yesterday, GitHub announced that repository admins can now transfer issues from one repository to another, better-fitting repository to help those issues find their home. The feature is currently in public beta. Nat Friedman, CEO of GitHub, said in his tweet, "We've just shipped the ability to transfer an issue from one repo to another. This is one of the most-requested GitHub features. Feels good!"

When a user transfers an issue, the comments, assignees, and issue timeline events are retained. The issue's labels, projects, and milestones are not retained, although users can see past activity in the issue's timeline. People or teams who are mentioned in the issue will receive a notification letting them know that the issue has been transferred to a new repository. The original URL redirects to the new issue's URL. People who don't have read permissions in the new repository will see a banner letting them know that the issue has been transferred to a new repository that they can't access.

Permission levels for issue transfer between repositories

People with owner or team maintainer roles can manage repository access with teams. Each team can have different repository access permissions. There are three types of repository permissions, i.e. Read, Write, and Admin, available for people or teams collaborating on repositories that belong to an organization. To transfer an open issue to another repository, the user needs to have admin permissions both on the repository the issue is in and on the repository where the issue is to be transferred. If the issue is being transferred from a repository that's owned by an organization you are a member of, you must transfer it to another repository within your organization. To know more about the repository permission levels, visit the GitHub Help documentation.

Steps to transfer an open issue to another repository

1. On GitHub, navigate to the main page of the repository.
2. Under your repository name, click Issues.
3. In the list of issues, click the issue you'd like to transfer.
4. In the right sidebar, click Transfer this issue.
5. In "Choose a repository," select the repository you want to transfer the issue to.
6. Click Transfer issue.

Read next:
GitHub Business Cloud is now FedRAMP authorized
GitHub updates developers and policymakers on EU copyright Directive at Brussels
GitHub October 21st outage RCA: How prioritizing 'data integrity' launched a series of unfortunate events that led to a day-long outage
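The transfer described above can also be scripted against GitHub's GraphQL API, which exposes a transferIssue mutation for this feature. The sketch below uses Python's requests library; the token and node IDs are placeholders, admin access to both repositories is assumed, and the exact mutation shape should be checked against GitHub's current API documentation.

```python
import requests

TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"  # needs admin access to both repos
MUTATION = """
mutation($issueId: ID!, $repositoryId: ID!) {
  transferIssue(input: {issueId: $issueId, repositoryId: $repositoryId}) {
    issue { number url }
  }
}
"""

resp = requests.post(
    "https://api.github.com/graphql",
    headers={"Authorization": f"bearer {TOKEN}"},
    json={
        "query": MUTATION,
        "variables": {
            "issueId": "SOURCE_ISSUE_NODE_ID",      # placeholder node ID
            "repositoryId": "TARGET_REPO_NODE_ID",  # placeholder node ID
        },
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```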

Git-bug: A new distributed bug tracker embedded in git

Melisha Dsouza
20 Aug 2018
3 min read
git-bug is a distributed bug tracker that is embedded in git. Using git's internal storage ensures that no files are added to your project. You can push your bugs to the same git remote that you are already using to collaborate with other people. The main idea behind implementing a distributed bug tracker in Git was to stop relying on a web service somewhere to deal with bugs. Browsing and editing bug reports offline wouldn't be much of a pain, thanks to this implementation.

While git-bug addresses a pressing need, note that the project is not yet ready for full-fledged use and is currently a proof of concept, released just 3 days ago at version 0.2.0. Reddit is abuzz with views on the release, with users sharing both praise and counterpoints about its drawbacks (see the discussion thread on reddit.com).

Now that you want to get your hands on git-bug, let's look at how to get started.

Installing git-bug, Linux packages needed, and CLI usage for its implementation

To install git-bug, all you need to do is execute the following command:

```
go get github.com/MichaelMure/git-bug
```

If it's not done already, add the golang binary directory to your PATH:

```
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
```

You can set up pre-compiled binaries by following 3 simple steps:

1. Head over to the release page and download the appropriate binary for your system.
2. Copy the binary anywhere in your PATH.
3. Rename the binary to git-bug (or git-bug.exe on Windows).

The only Linux package available for this release is the Arch Linux (AUR) package.

Further, you can use the CLI to work with git-bug using the following commands.

Create a new bug (your favorite editor will open to write a title and a message):

```
git bug new
```

Push your new entry to a remote:

```
git bug push [<remote>]
```

Pull for updates:

```
git bug pull [<remote>]
```

List existing bugs:

```
git bug ls
```

Use commands like show, comment, open or close to display and modify bugs. For more details about each command, you can run git bug <command> --help or scan the command's documentation.

Features of git-bug

#1 Interactive user interface for the terminal

Use the git bug termui command to browse and edit bugs. A short video on the project page demonstrates how easy and interactive it is to browse and edit bugs.

#2 Launch a rich web UI

Take a look at the web UI that is obtained with git bug webui (screenshots are available on the project's GitHub page). This web UI is entirely packed inside the same go binary and serves static content through a localhost HTTP server. It connects to the backend through a GraphQL API. Take a look at the schema for more clarity.

The additional features that are planned include:

- media embedding
- import/export of GitHub issues
- an extendable data model to support arbitrary bug trackers
- an inflatable raptor

Every new release is expected to come with exciting new features, but it is also coupled with a few minor constraints. You can check out some of the minor inconveniences as listed on the GitHub page. We can't wait for the release to be in a fully working condition. But before that, if you need any additional information on how git-bug works, head over to the GitHub page.

Read next:
Snapchat source code leaked and posted to GitHub
GitHub open sources its GitHub Load Balancer (GLB) Director
Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?


Build a Virtual Reality Solar System in Unity for Google Cardboard

Sugandha Lahoti
25 Apr 2018
21 min read
In today's tutorial, we will feature visualization of a newly discovered solar system. We will leverage the virtual reality development process for this project in order to illustrate the power of VR and the ease of use of the Unity 3D engine. This project is a dioramic scene, where the user floats in space, observing the movement of planets within the TRAPPIST-1 planetary system. In February 2017, astronomers announced the discovery of seven planets orbiting an ultra-cool dwarf star slightly larger than Jupiter. We will use this information to build a virtual environment to run on Google Cardboard (Android and iOS) or other compatible devices.

We will additionally cover the following topics:

- Platform setup: Download and install the platform-specific software needed to build an application on your target device. Experienced mobile developers with the latest Android or iOS SDK may skip this step.
- Google Cardboard setup: This package of development tools facilitates display and interaction on a Cardboard device.
- Unity environment setup: Initializing Unity's Project Settings in preparation for a VR environment.
- Building the TRAPPIST-1 system: Design and implement the Solar System project.
- Build for your device: Build and install the project onto a mobile device for viewing in Google Cardboard.

Platform setup

Before we begin building the solar system, we must set up our computer environment to build the runtime application for a given VR device. If you have never built a Unity application for Android or iOS, you will need to download and install the Software Development Kit (SDK) for your chosen platform. An SDK is a set of tools that will let you build an application for a specific software package, hardware platform, game console, or operating system. Installing the SDK may require additional tools or specific files to complete the process, and the requirements change from year to year, as operating systems and hardware platforms undergo updates and revisions.

To deal with this nightmare, Unity maintains an impressive set of platform-specific instructions to ease the setup process. Their list contains detailed instructions for the following platforms: Apple Mac, Apple TV, Android, iOS, Samsung TV, Standalone, Tizen, Web Player, WebGL, and Windows.

For this project, we will be building for the most common mobile devices: Android or iOS. The first step is to visit either of the following links to prepare your computer:

- Android: Android users will need the Android Developer Studio, Java Virtual Machine (JVM), and assorted drivers. Follow this link for installation instructions and files: https://docs.unity3d.com/Manual/Android-sdksetup.html.
- Apple iOS: iOS builds are created on a Mac and require an Apple Developer account and the latest version of the Xcode development tools. However, if you've previously built an iOS app, these conditions will have already been met by your system. For the complete instructions, follow this link: https://docs.unity3d.com/Manual/iphone-GettingStarted.html.

Google Cardboard setup

Like the Unity documentation website, Google also maintains an in-depth guide for the Google VR SDK for Unity set of tools and examples. This SDK provides the following features on the device:

- User head tracking
- Side-by-side stereo rendering
- Detection of user interactions (via trigger or controller)
- Automatic stereo configuration for a specific VR viewer
- Distortion correction
- Automatic gyro drift correction

These features are all contained in one easy-to-use package that will be imported into our Unity scene.
Download the SDK from the following link before moving on to the next step: http://developers.google.com/cardboard/unity/download.

At the time of writing, the current version of the Google VR SDK for Unity is version 1.110.1 and it is available via a GitHub repository. The previous link should take you to the latest version of the SDK. However, when starting a new project, be sure to compare the SDK version requirements with your installed version of Unity.

Setting up the Unity environment

Like all projects, we will begin by launching Unity and creating a new project. The first steps will create a project folder which contains several files and directories:

1. Launch the Unity application.
2. Choose the New option after the application splash screen loads to create a new project.
3. Save the project as Trappist1 in a location of your choice, as demonstrated in Figure 2.2.

To prepare for VR, we will adjust the Build Settings and Player Settings windows:

1. Open Build Settings from File | Build Settings.
2. Select the Platform for your target device (iOS or Android).
3. Click the Switch Platform button to confirm the change. The Unity icon in the right-hand column of the platform panel indicates the currently selected build platform. By default, it will appear next to the Standalone option. After switching, the icon should now be on the Android or iOS platform, as shown in Figure 2.3.

Note for Android developers: Ericsson Texture Compression (ETC) is the standard texture compression format on Android. Unity defaults to ETC (default), which is supported on all current Android devices, but it does not support textures that have an alpha channel. ETC2 supports alpha channels and provides improved quality for RBG textures on Android devices that support OpenGL ES 3.0. Since we will not need alpha channels, we will stick with ETC (default) for this project.

4. Open the Player Settings by clicking the button at the bottom of the window. The Player Settings panel will open in the Inspector panel.
5. Scroll down to Other Settings (Unity 5.5 through 2017.1) or XR Settings and check the Virtual Reality Supported checkbox. A list of choices will appear for selecting VR SDKs. Add Cardboard support to the list, as shown in Figure 2.4.
6. You will also need to create a valid Bundle Identifier or Package Name under the Identification section of Other Settings. The value should follow the reverse-DNS format com.yourCompanyName.ProjectName, using alphanumeric characters, periods, and hyphens. The default value must be changed in order to build your application.

Android development note: Bundle Identifiers are unique. When an app is built and released for Android, the Bundle Identifier becomes the app's package name and cannot be changed. This restriction and other requirements are discussed in this Android documentation link: http://developer.Android.com/reference/Android/content/pm/PackageInfo.html.

Apple development note: Once you have registered a Bundle Identifier to a Personal Team in Xcode, the same Bundle Identifier cannot be registered to another Apple Developer Program team in the future. This means that, while testing your game using a free Apple ID and a Personal Team, you should choose a Bundle Identifier that is for testing only; you will not be able to use the same Bundle Identifier to release the game. An easy way to do this is to add Test to the end of whatever Bundle Identifier you were going to use, for example, com.MyCompany.VRTrappistTest.
When you release an app, its Bundle Identifier must be unique to your app, and cannot be changed after your app has been submitted to the App Store.

Set the Minimum API Level to Android Nougat (API level 24) and leave the Target API on Automatic. Close the Build Settings window and save the project before continuing.

Choose Assets | Import Package | Custom Package... to import the GoogleVRForUnity.unitypackage previously downloaded from http://developers.google.com/cardboard/unity/download. The package will begin decompressing the scripts, assets, and plugins needed to build a Cardboard product. When completed, confirm that all options are selected and choose Import.

Once the package has been installed, a new menu titled GoogleVR will be available in the main menu. This provides easy access to the GoogleVR documentation and Editor Settings. Additionally, a directory titled GoogleVR will appear in the Project panel.

Right-click in the Project panel and choose Create | Folder to add the following directories: Materials, Scenes, and Scripts. Choose File | Save Scenes to save the default scene. I'm using the very original Main Scene and saving it to the Scenes folder created in the previous step. Choose File | Save Project from the main menu to complete the setup portion of this project.

Building the TRAPPIST-1 System

Now that we have Unity configured to build for our device, we can begin building our space-themed VR environment. We have designed this project to focus on building and deploying a VR experience. If you are moderately familiar with Unity, this project will be very simple. Again, this is by design. However, if you are relatively new, then the basic 3D primitives, a few textures, and a simple orbiting script will be a great way to expand your understanding of the development platform:

1. Create a new script by selecting Assets | Create | C# Script from the main menu. By default, the script will be titled NewBehaviourScript. Single-click this item in the Project window and rename it OrbitController. Finally, we will keep the project organized by dragging OrbitController's icon to the Scripts folder.
2. Double-click the OrbitController script item to edit it. Doing this will open a script editor as a separate application and load the OrbitController script for editing. The following code block illustrates the default script text:

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class OrbitController : MonoBehaviour {

    // Use this for initialization
    void Start () {

    }

    // Update is called once per frame
    void Update () {

    }
}
```

3. This script will be used to determine each planet's location, orientation, and relative velocity within the system. The specific dimensions will be added later, but we will start by adding some public variables. Starting on line 7, add the following five statements:

```csharp
public Transform orbitPivot;
public float orbitSpeed;
public float rotationSpeed;
public float planetRadius;
public float distFromStar;
```

Since we will be referring to these variables in the near future, we need a better understanding of how they will be used:

- orbitPivot stores the position of the object that each planet will revolve around (in this case, it is the star TRAPPIST-1).
- orbitalSpeed is used to control how fast each planet revolves around the central star.
- rotationSpeed is how fast an object rotates around its own axis.
- planetRadius represents a planet's radius compared to Earth. This value will be used to set the planet's size in our environment.
- distFromStar is a planet's distance in Astronomical Units (AU) from the central star.

4. Continue by adding the following lines of code to the Start() method of the OrbitController script:

```csharp
// Use this for initialization
void Start () {

    // Creates a random position along the orbit path
    Vector2 randomPosition = Random.insideUnitCircle;
    transform.position = new Vector3 (randomPosition.x, 0f, randomPosition.y) * distFromStar;

    // Sets the size of the GameObject to the Planet radius value
    transform.localScale = Vector3.one * planetRadius;
}
```

As shown within this script, the Start() method is used to set the initial position of each planet. We will add the dimensions when we create the planets, and this script will pull those values to set the starting point of each game object at runtime.

5. Next, modify the Update() method by adding two additional lines of code, as indicated in the following code block:

```csharp
// Update is called once per frame. This code block updates the Planet's position during each
// runtime frame.
void Update () {
    this.transform.RotateAround (orbitPivot.position, Vector3.up, orbitSpeed * Time.deltaTime);
    this.transform.Rotate (Vector3.up, rotationSpeed * Time.deltaTime);
}
```

This method is called once per frame while the program is running. Within Update(), the location for each object is determined by computing where the object should be during the next frame. this.transform.RotateAround uses the sun's pivot point to determine where the current GameObject (identified in the script by this) should appear in this frame. Then this.transform.Rotate updates how much the planet has rotated since the last frame.

6. Save the script and return to Unity.

Now that we have our first script, we can begin building the star and its planets. For this process, we will use Unity's primitive 3D GameObjects to create the celestial bodies:

1. Create a new sphere using GameObject | 3D Object | Sphere. This object will represent the star TRAPPIST-1. It will reside in the center of our solar system and will serve as the pivot for all seven planets.
2. Right-click on the newly created Sphere object in the Hierarchy window and select Rename. Rename the object Star.
3. Using the Inspector tab, set the object to Position: 0,0,0 and Scale: 1,1,1.
4. With the Star selected, locate the Add Component button in the Inspector panel. Click the button and enter orbitcontroller in the search box. Double-click on the OrbitController script icon when it appears. The script is now a component of the star.
5. Create another sphere using GameObject | 3D Object | Sphere and position it anywhere in the scene, with the default scale of 1,1,1. Rename the object Planet b.

Figure 2.5, from the TRAPPIST-1 Wikipedia page, shows the relative orbital period, distance from the star, radius, and mass of each planet. We will use these dimensions and names to complete the setup of our VR environment. Each value will be entered as public variables for their associated GameObjects:

6. Apply the OrbitController script to the Planet b asset by dragging the script icon to the planet in the Scene window or the Planet b object in the Hierarchy window. Planet b is our first planet and it will serve as a prototype for the rest of the system.
7. Set the Orbit Pivot point of Planet b in the Inspector. Do this by clicking the Selector Target next to the Orbit Pivot field (see Figure 2.6). Then, select Star from the list of objects. The field value will change from None (Transform) to Star (Transform).
Our script will use the origin point of the selected GameObject as its pivot point. Go back and select the Star GameObject and set its Orbit Pivot to Star, as we did with Planet b. Save the scene.

Now that our template planet has the OrbitController script, we can create the remaining planets:

1. Duplicate the Planet b GameObject six times, by right-clicking on it and choosing Duplicate.
2. Rename each copy Planet c through Planet h.
3. Set the public variables for each GameObject, using the following chart:

GameObject   Orbit Speed   Rotation Speed   Planet Radius   Dist From Star
Star         0             2                6               0
Planet b     .151          5                0.85            11
Planet c     .242          5                1.38            15
Planet d     .405          5                0.41            21
Planet e     .61           5                0.62            28
Planet f     .921          5                0.68            37
Planet g     1.235         5                1.34            45
Planet h     1.80          5                0.76            60

Table 2.1: TRAPPIST-1 GameObject Transform settings

4. Create an empty GameObject by right-clicking in the Hierarchy panel and selecting Create Empty. This item will help keep the Hierarchy window organized.
5. Rename the item Planets and drag Planet b through Planet h into the empty item.

This completes the layout of our solar system, and we can now focus on setting a location for the stationary player. Our player will not have the luxury of motion, so we must determine the optimal point of view for the scene:

1. Run the simulation. Figure 2.7 illustrates the layout being used to build and edit the scene.
2. With the scene running and the Main Camera selected, use the Move and Rotate tools or the Transform fields to readjust the position of the camera in the Scene window, or to find a position with a wide view of the action in the Game window, or a position with an interesting vantage point. Do not stop the simulation when you identify a position; stopping the simulation will reset the Transform fields back to their original values.
3. Click the small Options gear in the Transform panel and select Copy Component. This will store a copy of the Transform settings to the clipboard.
Adjust the individual planet Orbit Speed and Rotation Speed to feel natural. Take a bit of creative license here, leaning more on the scene's aesthetic quality than on scientific accuracy. Save the scene and the project. For the final design phase, we will add a space themed background using a Skybox. Skyboxes are rendered components that create the backdrop for Unity scenes. They illustrate the world beyond the 3D geometry, creating an atmosphere to match the setting. Skyboxes can be constructed of solids, gradients, or images using a variety of graphic programs and applications. For this project, we will find a suitable component in the Asset Store: Load the Asset Store from the Window menu. Search for a free space-themed skybox using the phrase space skybox price:0. Select a package and use the Download button to import the package into the Scene. Select Window | Lighting | Settings from the main menu. In the Scene section, click on the Selector Target for the Skybox Material and choose the newly downloaded skybox: Save the scene and the project. With that last step complete, we are done with the design and development phase of the project. Next, we will move on to building the application and transferring it to a device. Building the application To experience this simulation in VR, we need to have our scene run on a head-mounted display as a stereoscopic display. The app needs to compile the proper viewing parameters, capture and process head tracking data, and correct for visual distortion. When you consider the number of VR devices we would have to account for, the task is nothing short of daunting. Luckily, Google VR facilitates all of this in one easy-to-use plugin. The process for building the mobile application will depend on the mobile platform you are targeting. If you have previously built and installed a Unity app on a mobile device, many of these steps will have already been completed, and a few will apply updates to your existing software. Note: Unity is a fantastic software platform with a rich community and an attentive development staff. During the writing of this book, we tackled software updates (5.5 through 2017.3) and various changes in the VR development process. Although we are including the simplified building steps, it is important to check Google's VR documentation for the latest software updates and detailed instructions: Android: https://developers.google.com/vr/unity/get-started iOS: https://developers.google.com/vr/unity/get-started-ios Android Instructions If you are just starting out building applications from Unity, we suggest starting out with the Android process. The workflow for getting your project export from Unity to playing on your device is short and straight forward: On your Android device, navigate to Settings | About phone or Settings | About Device | Software Info. Scroll down to Build number and tap the item seven times. A popup will appear, confirming that you are now a developer. Now navigate to Settings | Developer options | Debugging and enable USB debugging. Building an Android application In your project directory (at the same level as the Asset folder), create a Build folder. Connect your Android device to the computer using a USB cable. You may see a prompt asking you to confirm that you wish to enable USB debugging on the device. If so, click OK. In Unity, select File | Build Settings to load the Build dialog. Confirm that the Platform is set to Android. If not choose Android and click Switch Platform. 
Note that Scenes/Main Scene should be loaded and checked in the Scenes In Build portion of the dialog. If not, click the Add Open Scenes button to add Main Scene to the list of scenes to be included in the build. Click the Build button. This will create an Android executable application with the .apk file extension. Invalid command Android error Some Android users have reported an error relating to the Android SDK Tools location. The problem has been confirmed in many installations prior to Unity 2017.1. If this problem occurs, the best solution is to downgrade to a previous version of the SDK Tools. This can be done by following the steps outlined here: Locate and delete the Android SDK Tools folder [Your Android SDK Root]/tools. This location will depend on where the Android SDK package was installed. For example, on my computer the Android SDK Tools folder is found at C:UserscpalmerAppDataLocalAndroidsdk. Download SDK Tools from http://dl-ssl.google.com/Android/repository/tools_r25.2.5-windows.zip. Extract the archive to the SDK root directory. Re-attempt the Build project process. If this is the first time you are creating an Android application, you might get an error indicating that Unity cannot locate your Android SDK root directory. If this is the case, follow these steps: Cancel the build process and close the Build Settings... window. Choose Edit | Preferences... from the main menu. Choose External Tools and scroll down to Android. Enter the location of your Android SDK root folder. If you have not installed the SDK, click the download button and follow the installation process. Install the app onto your phone and load the phone into your Cardboard device: iOS Instructions The process for building an iOS app is much more involved than the Android process. There are two different types of builds: Build for testing Build for distribution (which requires an Apple Developer License) In either case, you will need the following items to build a modern iOS app: A Mac computer running OS X 10.11 or later The latest version of Xcode An iOS device and USB cable An Apple ID Your Unity project For this demo, we will build an app for testing and we will assume you have completed the Getting Started steps (https://docs.unity3d.com/Manual/iphone-GettingStarted.html) from Section 1. If you do not yet have an Apple ID, obtain one from the Apple ID site (http://appleid.apple.com/). Once you have obtained an Apple ID, it must be added to Xcode: Open Xcode. From the menu bar at the top of the screen, choose Xcode | Preferences. This will open the Preferences window. Choose Accounts at the top of the window to display information about the Apple IDs that have been added to Xcode. To add an Apple ID, click the plus sign at the bottom left corner and choose Add Apple ID. Enter your Apple ID and password in the resulting popup box. Your Apple ID will then appear in the list. Select your Apple ID. Apple Developer Program teams are listed under the heading of Team. If you are using the free Apple ID, you will be assigned to Personal Team. Otherwise, you will be shown the teams you are enrolled in through the Apple Developer Program. Preparing your Unity project for iOS Within Unity, open the Build Settings from the top menu (File | Build Settings). Confirm that the Platform is set to iOS. If not choose iOS and click Switch Platform at the bottom of the window. Select the Build & Run button. Building an iOS application Xcode will launch with your Unity project. 
Select your platform and follow the standard process for building an application from Xcode. Install the app onto your phone and load the phone into your Cardboard device. We looked at the basic Unity workflow for developing VR experiences. We also provided a stationary solution so that we could focus on the development process. The Cardboard platform provides access to VR content from a mobile platform, but it also allows for touch and gaze controls. You read an excerpt from the book, Virtual Reality Blueprints, written by Charles Palmer and John Williamson. In this book, you will learn how to create compelling Virtual Reality experiences for mobile and desktop with three top platforms—Cardboard VR, Gear VR, and OculusVR. Read More Top 7 modern Virtual Reality hardware systems Virtual Reality for Developers: Cardboard, Gear VR, Rift, and Vive    

article-image-home-assistant-an-open-source-python-home-automation-hub-to-rule-all-things-smart
Prasad Ramesh
25 Aug 2018
2 min read
Save for later

Home Assistant: an open source Python home automation hub to rule all things smart

Prasad Ramesh
25 Aug 2018
2 min read
We have Amazon Alexa, Google Home, and Philips Hue for smart actions in your home, but they are individual products that require separate controls. What if all of your smart devices could work together under a master hub? That is Home Assistant.

Home Assistant is an automation platform that can run on a Raspberry Pi. It acts as a central hub for connecting and automating all your smart devices, and it supports services like IFTTT, Pushbullet, Google Cast, and many others. Currently, over a thousand components are supported. It tracks the state of all the installed smart devices in your home, and all of them can be controlled from a single, mobile-friendly interface. For security and privacy, all operations via Home Assistant are done locally, meaning no data is stored in the cloud. The Home Assistant website advertises functions like having lights turn on at sunset or dimming the lights when you watch a movie on Chromecast. (A minimal API sketch at the end of this article shows one way such an action can be triggered programmatically.)

Hass.io is an all-in-one disk image for getting started with Home Assistant, and there is a guide to install Hass.io on a Raspberry Pi. The requirements for running Home Assistant are:

- Raspberry Pi 3 Model B+ and a power supply (at least 2.5A)
- A Class 10 or higher micro SD card, 32 GB or bigger
- An SD card reader
- Ethernet cable (optional, Hass.io can work with WiFi)
- Optionally, a USB stick for unattended configuration

Home Assistant is a hub; it cannot control anything on its own. Think of it as a master device that passes instructions and communicates with other devices for home automation. Home Assistant can't do anything if there are no smart devices to work with. Since it is open source, there are dozens of contributions from tinkerers and DIY enthusiasts worldwide, and you can check out the automation examples to learn more and use them. The installation is simple, and there is a friendly UI to control your automation tasks. There is plenty of information at the Home Assistant website to get you started. They also have a GitHub repository.
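As mentioned above, here is a minimal sketch of what controlling a device through the hub can look like from Python, using Home Assistant's REST API via the requests library. The host address, the access token, and the light.living_room entity ID are placeholder assumptions for this illustration and will differ in your own setup.

import requests

# Placeholders: adjust the host, token, and entity ID for your own installation.
host = "http://homeassistant.local:8123"
headers = {
    "Authorization": "Bearer YOUR_LONG_LIVED_ACCESS_TOKEN",
    "Content-Type": "application/json",
}

# Ask the hub to turn a light on; Home Assistant relays the command to the device.
response = requests.post(
    host + "/api/services/light/turn_on",
    headers=headers,
    json={"entity_id": "light.living_room"},
)
print(response.status_code)

# Read back the state the hub is tracking for that device.
state = requests.get(host + "/api/states/light.living_room", headers=headers)
print(state.json())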
Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration
Apple joins the Thread Group, signalling its Smart Home ambitions with HomeKit, Siri and other IoT products
Amazon Echo vs Google Home: Next-gen IoT war
article-image-spotify-has-one-of-the-most-intricate-uses-of-javascript-in-the-world-says-former-engineer
Richard Gall
19 Jul 2018
3 min read
Save for later

Spotify has "one of the most intricate uses of JavaScript in the world," says former engineer

Richard Gall
19 Jul 2018
3 min read
A former Spotify engineer, Mattias Peter Johansson, has outlined how the music streaming platform uses JavaScript in its desktop application. It's complicated and, according to Reddit, "kind of insane". Responding to a question on Quora, Johansson says it could be "among the top 25 most intricate uses of JavaScript in the world." What's particularly interesting is how this intricate JavaScript has influenced the Spotify architecture and the way the development teams are organized.

How JavaScript is used on the Spotify desktop app

JavaScript is used across the Spotify desktop client. Wherever UI is concerned, it uses JavaScript. C++ is used for functionality beneath the UI, with JavaScript sitting on top of it. The two languages are connected by an interface aptly called a 'bridge.'

Spotify's squads and spotlets

The Spotify team is made up of small squads of anywhere from 3 to 12 people. Johansson explains that "a feature is generally owned by a single squad, and during normal conditions the squad has all it needs to develop and maintain its feature." Each team has as many backend, frontend, and mobile developers as necessary for the particular feature it owns. These features are known as 'spotlets,' and each spotlet is essentially a web app; together they power the desktop app's UI. Johansson explains how they work, saying: "They all run inside Chromium Embedded Framework, each app living within their own little iframe, which gives squads the ability to work with whatever frameworks they need, without the need to coordinate tooling and dependencies with other squads." (A conceptual sketch at the end of this article illustrates this bridge-and-spotlet pattern.)

The advantage of this is that it makes technical decision-making much easier. As Johansson explains, "introducing a library is a discussion between a few people instead of decision that involves ~100 people and their various needs."

Shared functionalities across the Spotify development team

Although spotlets and squads create a somewhat fragmented picture of a development team, things are unified. "The latest versions of all Spotlets are zipped and bundled with the desktop client binary on every release, assets and all," says Johansson. Individual spotlets are also sometimes released on their own when an emergency fix is needed. Although tooling decisions are left up to individual squads, a couple of tools are used across the team. This includes GLUE, a CSS framework that allows some coordination and alignment in terms of design. The team also relies heavily on npm, as you might expect: "We have our own internal npm repository where we publish internal modules, and we package the code together using a Browserify-like tool."
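To make the bridge-and-spotlet structure described above more concrete, here is a small conceptual sketch in Python. It is not Spotify's code and all names are invented; it only illustrates the general pattern of isolated feature modules that never touch the native core directly but go through a narrow bridge interface.

class Core:
    """Stands in for the native layer that does the real work beneath the UI."""
    def handle(self, action, params):
        if action == "play_track":
            return "playing " + params["track_id"]
        return "unknown action: " + action


class Bridge:
    """Stands in for the UI-to-core bridge: the only path from feature code to the core."""
    def __init__(self, core):
        self._core = core

    def request(self, action, **params):
        return self._core.handle(action, params)


class SearchSpotlet:
    """One isolated feature, owned by a single squad; it only ever sees the bridge."""
    def __init__(self, bridge):
        self._bridge = bridge

    def on_result_clicked(self, track_id):
        return self._bridge.request("play_track", track_id=track_id)


bridge = Bridge(Core())
spotlet = SearchSpotlet(bridge)
print(spotlet.on_result_clicked("track-42"))  # -> playing track-42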

article-image-sherin-thomas-explains-how-to-build-a-pipeline-in-pytorch-for-deep-learning-workflows
Packt Editorial Staff
09 May 2019
8 min read
Save for later

Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows

Packt Editorial Staff
09 May 2019
8 min read
A typical deep learning workflow starts with ideation and research around a problem statement, where the architectural design and model decisions come into play. Following this, the theoretical model is tested with prototypes. This includes trying out different models or techniques, such as skip connections, and making decisions on what not to try out. PyTorch started as a research framework built by a Facebook intern, and it has since grown into a framework used for research and prototyping as well as for writing efficient models with serving modules. The PyTorch deep learning workflow is broadly equivalent to the workflow implemented by almost everyone in the industry, even for highly sophisticated implementations, with slight variations. In this article, we explain the core of ideation and planning, and of design and experimentation, in the PyTorch deep learning workflow.

This article is an excerpt from the book PyTorch Deep Learning Hands-On by Sherin Thomas and Sudhanshu Passi. The book attempts to provide an entirely practical introduction to PyTorch, with numerous examples and dynamic AI applications, and demonstrates the simplicity and efficiency of the PyTorch approach to machine intelligence and deep learning.

Ideation and planning

Usually, in an organization, the product team comes up with a problem statement for the engineering team, to know whether they can solve it or not. This is the start of the ideation phase. In academia, this could be the decision phase where candidates have to find a problem for their thesis. In the ideation phase, engineers brainstorm and find the theoretical implementations that could potentially solve the problem. In addition to converting the problem statement into a theoretical solution, the ideation phase is where we decide what the data types are and what dataset we should use to build the proof of concept (POC) of the minimum viable product (MVP). This is also the stage where the team decides which framework to go with, by analyzing the behavior of the problem statement, available implementations, available pretrained models, and so on. This stage is very common in the industry, and I have come across numerous examples where a well-planned ideation phase helped the team roll out a reliable product on time, while an unplanned ideation phase destroyed the whole product effort.

Design and experimentation

The crucial part of design and experimentation lies in the dataset and its preprocessing. For any data science project, the major share of time is spent on data cleaning and preprocessing, and deep learning is no exception. Data preprocessing is one of the vital parts of building a deep learning pipeline. Real-world datasets are usually not cleaned or formatted for a neural network to process; conversion to floats or integers, normalization, and so on are required before further processing. Building a data processing pipeline is also a non-trivial task that consists of writing a lot of boilerplate code. To make it much easier, dataset builders and a DataLoader pipeline package are built into the core of PyTorch.

The dataset and DataLoader classes

Different types of deep learning problems require different types of datasets, and each of them might require different types of preprocessing depending on the neural network architecture we use. This is one of the core problems in deep learning pipeline building.
Although the community has made datasets for different tasks available for free, writing a preprocessing script is almost always painful. PyTorch solves this problem by providing abstract classes for writing custom datasets and data loaders. The example given here is a simple dataset class to load the fizzbuzz dataset, but extending it to handle any type of dataset is fairly straightforward. PyTorch's official documentation uses a similar approach to preprocess an image dataset before passing it to a complex convolutional neural network (CNN) architecture.

A dataset class in PyTorch is a high-level abstraction that handles almost everything required by the data loaders. The custom dataset class defined by the user needs to override the __len__ and __getitem__ functions of the parent class, where __len__ is used by the data loaders to determine the length of the dataset and __getitem__ is used by the data loaders to get an item. The __getitem__ function expects the user to pass the index as an argument and returns the item that resides at that index:

from dataclasses import dataclass
from torch.utils.data import Dataset, DataLoader


@dataclass(eq=False)
class FizBuzDataset(Dataset):
    input_size: int
    start: int = 0
    end: int = 1000

    def encoder(self, num):
        # Binary-encode the index and left-pad with zeros to input_size bits.
        ret = [int(i) for i in '{0:b}'.format(num)]
        return [0] * (self.input_size - len(ret)) + ret

    def __getitem__(self, idx):
        x = self.encoder(idx)
        if idx % 15 == 0:
            y = [1, 0, 0, 0]  # multiple of both three and five
        elif idx % 5 == 0:
            y = [0, 1, 0, 0]  # multiple of five
        elif idx % 3 == 0:
            y = [0, 0, 1, 0]  # multiple of three
        else:
            y = [0, 0, 0, 1]  # multiple of neither
        return x, y

    def __len__(self):
        return self.end - self.start

The implementation of the custom dataset uses the brand new dataclasses from Python 3.7. dataclasses help to eliminate boilerplate code for Python magic functions, such as __init__, using dynamic code generation. This requires the code to be type-hinted, which is what the first three lines inside the class are for. You can read more about dataclasses in the official documentation of Python (https://docs.python.org/3/library/dataclasses.html).

The __len__ function returns the difference between the end and start values passed to the class. In the fizzbuzz dataset, the data is generated by the program. The implementation of data generation is inside the __getitem__ function, where the class instance generates the data based on the index passed by DataLoader. PyTorch made the class abstraction as generic as possible so that the user can define what the data loader should return for each id. In this particular case, the class instance returns the input and output for each index, where the input, x, is the binary-encoded version of the index itself and the output is the one-hot encoded output with four states. The four states represent whether the number is a multiple of three (fizz), a multiple of five (buzz), a multiple of both three and five (fizzbuzz), or not a multiple of either three or five.

Note: For Python newbies, the way the dataset works can be understood by looking first at the loop that iterates over the integers from zero to the length of the dataset (the length is returned by the __len__ function when len(object) is called).
The following snippet shows the simple loop:

dataset = FizBuzDataset(input_size=10)  # input_size has no default, so it must be supplied
for i in range(len(dataset)):
    x, y = dataset[i]

dataloader = DataLoader(dataset, batch_size=10, shuffle=True, num_workers=4)
for batch in dataloader:
    print(batch)

The DataLoader class accepts a dataset class that inherits from torch.utils.data.Dataset. DataLoader takes the dataset and performs non-trivial operations such as mini-batching, multithreading, shuffling, and so on, to fetch the data from the dataset. It accepts a dataset instance from the user and uses a sampler strategy to sample data as mini-batches. The num_workers argument decides how many parallel worker processes should fetch the data; this helps to avoid a CPU bottleneck so that the CPU can keep up with the GPU's parallel operations. Data loaders also allow users to specify whether to use pinned CUDA memory, which copies the data tensors into CUDA's pinned memory before returning them to the user. Using pinned memory is the key to fast data transfers between devices, since the data is loaded into pinned memory by the data loader itself, which is done by multiple cores of the CPU anyway.
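As a hedged sketch of the pinned-memory option just described (this is not part of the book excerpt): pin_memory=True asks the DataLoader to place each batch in page-locked host memory, and non_blocking=True lets the copy to the GPU overlap with other work. The random tensors, batch size, and label range here are purely illustrative.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Illustrative data: 1000 random samples with 10 features each and an integer label.
features = torch.randn(1000, 10)
labels = torch.randint(0, 4, (1000,))
dataset = TensorDataset(features, labels)

# pin_memory=True places each batch in page-locked (pinned) host memory,
# which speeds up the later host-to-GPU copy and allows it to run asynchronously.
loader = DataLoader(dataset, batch_size=64, shuffle=True,
                    num_workers=4, pin_memory=True)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
for x, y in loader:
    # non_blocking=True overlaps the copy with computation when the source is pinned.
    x = x.to(device, non_blocking=True)
    y = y.to(device, non_blocking=True)
    # ... forward pass, loss, and backward pass would go here ...
    break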
Most often, especially while prototyping, custom datasets might not be available to developers, and in such cases they have to rely on existing open datasets. The good thing about working with open datasets is that most of them are free from licensing burdens, and thousands of people have already tried preprocessing them, so the community will help out. PyTorch came up with utility packages for all three types of datasets, with pretrained models, preprocessed datasets, and utility functions to work with them.

This article is about how to build a basic pipeline for deep learning development. The system we defined here is a very common, general approach that is followed by different sorts of companies, with slight changes. The benefit of starting with a generic workflow like this is that you can build a really complex workflow on top of it as your team or project grows.

Build deep learning workflows and take deep learning models from prototyping to production with PyTorch Deep Learning Hands-On, written by Sherin Thomas and Sudhanshu Passi.

F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Top 10 deep learning frameworks