
Tech News


NumPy drops Python 2 support. Now you need Python 3.5 or later.

Prasad Ramesh
17 Dec 2018
2 min read
In a GitHub pull request last week, the NumPy community decided to remove support for Python 2.7. Python 3.4 support will also be dropped with this pull request. So now, to use NumPy 1.17 and newer versions, you will need Python 3.5 or later. NumPy has supported both Python versions since 2010.

This move doesn't come as a surprise, with the Python core team itself dropping support for Python 2 in 2020. The NumPy team noted that the move comes because "Python 2 is an increasing burden on our limited resources". The discussion to drop Python 2 support in NumPy started almost a year ago.

Running pip install numpy on Python 2 will still install the last working version, but from here on it may not contain the latest features released for Python 3.5 or higher. However, NumPy on Python 2 will still be supported until December 31, 2019; after January 1, 2020, it may not contain the newest bug fixes.

The Twitter audience sees this as a welcome move:
https://twitter.com/TarasNovak/status/1073262599750459392
https://twitter.com/esc___/status/1073193736178462720

A comment on Hacker News reads: "Let's hope this move helps with the transitioning to Python 3. I'm not a Python programmer myself, but I'm tired of things getting hairy on Linux dependencies written in Python. It almost seems like I always got to have a Python 2 and a Python 3 version of some packages so my system doesn't break."

Another one reads: "I've said it before, I'll say it again. I don't care for everything-is-unicode-by-default. You can take my Python 2 when you pry it from my cold dead hands."

Some researchers who use NumPy and SciPy still stick to Python 2; this move from the NumPy team will help in getting everyone to work on a single version. A single supported version will certainly help with fragmentation. Often, Python developers find themselves in a situation where they have one version installed while a specific module is available or works properly only in another version. Some also argue that Python 2 offers greater stability or some particular feature, but the general sentiment is more supportive of adopting Python 3.

Introducing numpywren, a system for linear algebra built on a serverless architecture
NumPy 1.15.0 release is out!
Implementing matrix operations using SciPy and NumPy
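For teams that must stay on Python 2 for now, the practical consequence is pinning NumPy below 1.17. The following is a minimal sketch of a runtime guard in Python; the check and the version bound simply mirror the support policy described above, and the script assumes a NumPy 1.x series.

import sys
import numpy as np

# NumPy 1.17+ requires Python 3.5 or later; Python 2 environments
# should pin the dependency, e.g.  pip install "numpy<1.17"
minor = int(np.__version__.split(".")[1])  # assumes a 1.x version string
if sys.version_info < (3, 5) and minor >= 17:
    raise RuntimeError("This interpreter needs numpy<1.17; pin it in requirements.txt")

print("NumPy", np.__version__, "on Python", sys.version.split()[0])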


What's new in Wireshark 2.6?

Savia Lobo
10 May 2018
2 min read
Less than ten months after Wireshark's last release, the Wireshark community has now released Wireshark 2.6. Wireshark is one of the most popular tools to analyze traffic over a network interface or a network stream. It is used for troubleshooting, analysis, development, and education. Wireshark is based on the "Ethereal" project initiated by Gerald Combs and is released under the terms of the GNU General Public License (GNU GPL).

Wireshark 2.6 ships with numerous innovations, improvements, and bug fixes. The highlight of Wireshark 2.6 is that it is the last release that will support the legacy (GTK+) user interface; it will not be supported or available in Wireshark 3.0.

Major improvements since version 2.5 include:
This version now supports HTTP Request sequences.
Support for MaxMind DB files has been added, while support for GeoIP and GeoLite Legacy databases has been removed.
Windows packages are now built using Microsoft Visual Studio 2017.
The IP map feature (the "Map" button in the "Endpoints" dialog) has been removed.

Some other improvements since version 2.4:
Display filter buttons can now be edited, disabled, and removed via a context menu directly from the toolbar.
Support for hardware timestamping of packets has been added.
Application startup time has been reduced.
Some keyboard shortcut mix-ups have been resolved by assigning new shortcuts to Edit → Copy methods.

New protocol support: Many protocols have been added, including the following.
ActiveMQ Artemis Core Protocol: This supports interceptors to intercept packets entering and exiting the server.
Bluetooth Mesh Protocol: This allows Bluetooth Low Energy (BLE) devices to network together to carry data back to a gateway device, where it can be further routed to the internet.
Steam In-Home Streaming discovery protocol: This allows one computer to handle input and output while another computer actually handles the rendering, calculations, networking, and so on.

Bug fix: Dumpcap, a network traffic dump tool which lets one capture packet data from a live network and write the packets to a file, might not have quit if Wireshark or TShark crashed (Bug 1419).

To know more about the updates in detail, read the Wireshark 2.6.0 Release Notes.

What is Digital Forensics?
Microsoft Cloud Services get GDPR Enhancements
IoT Forensics: Security in an always connected world where things talk
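Capture files like the ones Dumpcap produces can also be inspected programmatically. Below is a minimal sketch using pyshark, a third-party Python wrapper around TShark; pyshark, the trace.pcap file name, and the display filter are illustrative assumptions, not part of the 2.6 release.

import pyshark  # pip install pyshark; requires a local TShark installation

# display_filter accepts the same filter syntax used in the Wireshark GUI.
capture = pyshark.FileCapture("trace.pcap", display_filter="http.request")
for number, packet in enumerate(capture, start=1):
    # Print source, destination, and the requested URI for each HTTP request.
    print(number, packet.ip.src, "->", packet.ip.dst, packet.http.request_uri)
capture.close()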


Unity switches to WebAssembly as the output format for the Unity WebGL build target

Sugandha Lahoti
16 Aug 2018
2 min read
With the launch of the Unity 2018.2 release last month, Unity is finally making the switch to WebAssembly as the output format for the Unity WebGL build target. WebAssembly support was first teased in Unity 5.6 as an experimental feature. Unity 2018.1 marked the removal of the experimental label, and finally, in 2018.2, WebAssembly replaces asm.js as the default linker target.

(Image source: Unity Blog)

WebAssembly replaced asm.js because it is faster, smaller, and more memory-efficient, all of which are pain points of the Unity WebGL export. A WebAssembly file is a binary file (a more compact way to deliver code), as opposed to asm.js, which is text. In addition, code modules that have already been compiled can be stored in an IndexedDB cache, resulting in a really fast startup when reloading the same content. With WebAssembly, the code size for an empty project is ~12% smaller, or ~18% if 3D physics is included.

(Image source: Unity Blog)

WebAssembly also has its own instruction set. In development builds, it adds more precise error detection in arithmetic operations. In non-development builds, this kind of detection of arithmetic errors is masked, so the user experience is not affected.

Asm.js imposed a restriction on the size of the Unity heap: its size had to be specified at build time and could never change. WebAssembly enables the Unity heap size to grow at runtime, which lets Unity content memory usage exceed the initial heap size.

Unity is now working on multi-threading support, which will initially be released as an experimental feature and will be limited to internal native threads (no C# threads yet). Debugging hasn't improved yet: while browsers have begun to provide WebAssembly debugging in their devtools suites, these debuggers do not yet scale well to Unity3D sizes of content.

What's next to come
Unity is still working on new features and optimizations to improve startup times and performance:
Asynchronous instantiation
Structured cloning, which allows compiled WebAssembly to be cached in the browser
Baseline and tiered compilation, to speed up instantiation
Streaming instantiation to compile WebAssembly code while downloading it
Multi-threading

You can read the full details on the Unity Blog.

Unity 2018.2: Unity release for this year second time in a row!
GitHub for Unity 1.0 is here with Git LFS and file locking support
What you should know about Unity 2018 Interface


GNOME 3.32 released with fractional scaling, improvements to desktop, web and much more

Amrata Joshi
14 Mar 2019
3 min read
Yesterday, the team at GNOME released GNOME 3.32, the latest version of GNOME 3, a free and open-source desktop environment for Unix-like operating systems. This release comes with improvements to the desktop, GNOME Web, and much more.

What's new in GNOME 3.32?

Fractional scaling
Fractional scaling is available as an experimental option that includes several fractional values with good visual quality on any given monitor. This feature is a major enhancement for the GNOME desktop. It requires manually adding scale-monitor-framebuffer to the org.gnome.mutter experimental-features settings key.

Improved data structures in the GNOME desktop
This release comes with improvements to foundational data structures in the GNOME desktop, giving a faster and snappier feel to the animations, icons, and top shell panel. The search database has been improved, which makes searching faster. Even the on-screen keyboard has been improved; it now supports an emoji chooser.

New automation mode in GNOME Web
GNOME Web now comes with a new automation mode, which allows the application to be controlled by WebDriver. The reader mode has been enhanced and now features a set of customizable preferences and an improved style. With this release, touchpad users can take advantage of more gestures while browsing; for example, swipe left or right to go back or forward through browsing history.

New settings for permissions
Settings comes with a new "Application Permissions" panel that shows resources and permissions for various applications, including installed Flatpak applications. Users can now grant permissions to certain resources when requested by the application. The Sound settings have been reworked to support a vertical layout and a more intuitive placement of options. With this release, the night light color temperature can be adjusted for a warmer or cooler setting.

GNOME Boxes
GNOME Boxes tries to enable 3D acceleration for virtual machines if both the guest and host support it. This leads to better performance of graphics-intensive guest applications such as games and video editors.

Application management from multiple sources
This release can handle apps available from multiple sources, such as Flatpak and distribution repositories. With this release, Flatpak app entries now list the permissions required on the details page, giving users a comprehensive understanding of what data the software will need access to. Browsing application details is also faster now, thanks to the new XML parsing library used in this release.

To know more about this release, check out the official announcement.

GNOME team adds Fractional Scaling support in the upcoming GNOME 3.32
GNOME 3.32 says goodbye to application menus
Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes
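As a concrete illustration, the fractional-scaling option described above is enabled by writing the flag into mutter's experimental-features key. This one-liner is a sketch based on the setting named in the release notes; the option is experimental and may change in later releases:

gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"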


GCC 9.1 releases with improved diagnostics, simpler C++ errors and much more

Amrata Joshi
11 Mar 2019
2 min read
Just two months ago, the team behind GCC (GNU Compiler Collection) outlined the changes coming to GCC 9.1. And last week, the team released GCC 9.1 with improved diagnostics, better location information, and simpler C++ errors.

What's new in GCC 9.1?

Changes to diagnostics
The team added a left-hand margin that shows line numbers, giving GCC 9.1's diagnostics a new look. The diagnostics can now label regions of the source code in order to show relevant information; for example, the left-hand and right-hand operands of a "+" operator can be labeled, and GCC highlights them inline. The team has also added a JSON output format, so GCC 9.1 now has a machine-readable output format for diagnostics.

Simpler C++ errors
When dealing with C++, the compiler usually has to consider several candidate functions at a given call site and reject all of them for different reasons, and g++'s error messages need to give a specific reason for rejecting each function. This makes even simple cases difficult to read. This release comes with special casing to simplify g++ errors for common cases.

Improved C++ syntax handling in GCC 9.1
A major issue with GCC's internal representation is that not every node within the syntax tree has a source location. For GCC 9.1, the team has worked to solve this problem, so most places in the C++ syntax tree now retain location information for longer.

Users can now emit optimization information
GCC 9.1 can automatically vectorize loops, reorganizing them to work on multiple iterations at once. Users now have an option, -fopt-info, that helps in emitting optimization information.

Improved runtime library in GCC 9.1
This release comes with improved experimental support for C++17, including <memory_resource>. There is also support for opening file streams with wide character paths on Windows.

Arm-specific changes
Support for the deprecated Armv2 and Armv3 architectures and their variants has been removed. Support for the Armv5 and Armv5E architectures has also been removed.

To know more about this news, check out Red Hat's blog post.

DragonFly BSD 5.4.1 released with new system compiler in GCC 8 and more
The D language front-end support finally merged into GCC 9
GCC 8.1 Standards released!
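To see the new reports in practice, both the optimization output and the machine-readable diagnostics are driven by command-line flags. The invocations below are a sketch: example.c is a hypothetical file, -fopt-info is the option named above, and -fdiagnostics-format=json corresponds to the JSON format described in the diagnostics section.

gcc -O3 -fopt-info-vec example.c -o example                    # report which loops were vectorized
gcc -fdiagnostics-format=json example.c 2> diagnostics.json    # machine-readable diagnostics on stderr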


Python in Visual Studio Code released with enhanced Variable Explorer, Data Viewer, and more!

Amrata Joshi
27 Apr 2019
3 min read
This week, the team behind the Python extension for Visual Studio Code announced a new release. This release comes with an enhanced Variable Explorer and Data Viewer, as well as improvements to the Python Language Server.

What's new in Python in Visual Studio Code?

Enhanced Variable Explorer and Data Viewer
This release comes with a built-in Variable Explorer along with a Data Viewer, which helps users easily view, inspect, and filter the variables in their application, including lists, NumPy arrays, pandas data frames, and more. The release shows a section for variables while running code and cells in the Python Interactive window. On expanding it, users can see a list of the variables in the current Jupyter session. More variables automatically show up as they get used in the code, and users can sort the variables in columns by clicking on each column header. Users can double-click on each row, or use the "Show variable in Data Viewer" button, to view the full data of each variable in the newly added Data Viewer, and can perform a simple search over its values.

Improvements to debug configuration
In this release, the process of configuring the debugger has been simplified. If a user starts debugging through the Debug panel and no debug configuration exists, the user is now prompted to create one. Instead of manually configuring the launch.json file, users can now create a debug configuration through a set of menus.

Improvements to the Python Language Server
This release comes with fixes and improvements to the Python Language Server. The team has added back features that were removed in the 0.2 release, including "Rename Symbol", "Go to Definition", and "Find All References". Loading time and memory usage have also been improved when importing scientific libraries such as pandas, Plotly, and PyQt5, especially when running in full Anaconda environments.

Read also: Visualizing data in R and Python using Anaconda [Tutorial]

Major changes
The default behavior of the debugger has been changed to display return values. "Unit Test" has been renamed to "Test" or "Testing". The debugStdLib setting has been replaced with justMyCode. This release comes with a setting to enable or disable the data science code lens. The reliability of test discovery when using pytest has been improved.

Bug fixes
Issues with cell spacing have been resolved. Problems with errors not showing up for imports have been fixed. Issues with tabs in the comments section have been fixed.

To know more about this news, check out Microsoft's official blog post.

Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
Debugging and Profiling Python Scripts [Tutorial]
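For reference, the menu-driven flow described above ultimately writes an entry into launch.json. The following is a minimal sketch of such a configuration, using the justMyCode setting that replaces debugStdLib; the values shown are illustrative defaults, not the only options.

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": true
        }
    ]
}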

Spotify has "one of the most intricate uses of JavaScript in the world," says former engineer

Richard Gall
19 Jul 2018
3 min read
A former Spotify engineer, Mattias Peter Johansson, has outlined how the music streaming platform uses JavaScript in its desktop application. It's complicated and, according to Reddit, "kind of insane". Responding to a question on Quora, Johansson says it could be "among the top 25 most intricate uses of JavaScript in the world." What's particularly interesting is how this intricate JavaScript has influenced the Spotify architecture and the way the development teams are organized.

How JavaScript is used on the Spotify desktop app
JavaScript is used across the Spotify desktop client. Wherever UI is concerned, it uses JavaScript. C++ is used for functionality beneath the UI, with JavaScript sitting on top of it. The languages are connected by an interface aptly called a 'bridge.'

Spotify's squads and spotlets
The Spotify team is made up of small squads of anywhere from 3 to 12 people. Johansson explains that "a feature is generally owned by a single squad, and during normal conditions the squad has all it needs to develop and maintain its feature." Each team has as many backend, frontend, and mobile developers as necessary for the particular feature it owns. These features are known as 'spotlets,' and each of them is essentially a web app; together they power the desktop app's UI. Johansson explains how they work: "They all run inside Chromium Embedded Framework, each app living within their own little iframe, which gives squads the ability to work with whatever frameworks they need, without the need to coordinate tooling and dependencies with other squads." The advantage of this is that it makes technical decision making much easier. As Johansson explains, "introducing a library is a discussion between a few people instead of decision that involves ~100 people and their various needs."

Shared functionalities across the Spotify development team
Although spotlets and squads create a somewhat fragmented picture of a development team, things are unified. "The latest versions of all Spotlets are zipped and bundled with the desktop client binary on every release, assets and all," says Johansson. Individual spotlets are also sometimes released on their own when an emergency fix is needed. Although tooling decisions are left up to individual squads, there are a couple of tools that are used across the team. This includes GLUE, a CSS framework that allows some coordination and alignment in terms of design. The team also relies heavily on npm, as you might expect: "We have our own internal npm repository where we publish internal modules, and we package the code together using a Browserify-like tool."


Introducing Voila that turns your Jupyter notebooks to standalone web applications

Bhagyashree R
13 Jun 2019
3 min read
Last week, a Jupyter Community Workshop on dashboarding was held in Paris. At the workshop, several contributors came together to build the Voila package, the details of which QuantStack shared yesterday. Voila serves live Jupyter notebooks as standalone web applications, providing a neat way to share your work and results with colleagues.

Why do we need Voila?
Jupyter notebooks allow you to do "literate programming", in which human-friendly explanations are accompanied by code blocks. This lets scientists, researchers, and other practitioners of scientific computing add the theory behind their code, including mathematical equations. However, Jupyter notebooks can prove problematic when you plan to communicate your results with non-technical stakeholders. They might be put off by the code blocks, and by the need to run the notebook to see the results. A notebook also has no mechanism to prevent arbitrary code execution by the end user.

How does Voila work?
Voila addresses all the aforementioned concerns by converting your Jupyter notebook to a standalone web application. After connecting to a notebook URL, Voila launches the kernel for that notebook and runs all the cells. Once the execution is complete, it does not shut down the kernel. The notebook gets converted to HTML and is served to the user; this rendered HTML includes JavaScript that is responsible for initiating a websocket connection with the Jupyter kernel. (A diagram depicting how this works is available on the Jupyter Blog.)

Following are the features Voila provides:
Renders Jupyter interactive widgets: It supports Jupyter widget libraries including bqplot, ipyleaflet, ipyvolume, ipympl, ipysheet, plotly, and ipywebrtc.
Prevents arbitrary code execution: It does not allow arbitrary code execution by consumers of dashboards.
A language-agnostic dashboarding system: Voila is built upon Jupyter standard protocols and file formats, enabling it to work with any Jupyter kernel (C++, Python, Julia).
Includes a custom template system for better extensibility: It provides a flexible template system to produce rich application layouts.

Many Twitter users applauded this new way of creating live and interactive dashboards from Jupyter notebooks:
https://twitter.com/philsheard/status/1138745404772818944
https://twitter.com/andfanilo/status/1138835776828071936
https://twitter.com/ToluwaniJohnson/status/1138866411261124608

Some users also compared it with another dashboarding solution called Panel. The main difference between Panel and Voila is that Panel supports Bokeh widgets, whereas Voila is framework- and language-agnostic. "Panel can use a Bokeh server but does not require it; it is equally happy communicating over Bokeh Server's or Jupyter's communication channels. Panel doesn't currently support using ipywidgets, nor does Voila currently support Bokeh plots or widgets, but the maintainers of both Panel and Voila have recently worked out mechanisms for using Panel or Bokeh objects in ipywidgets or using ipywidgets in Panels, which should be ready soon," a Hacker News user commented.

To read more in detail about Voila, check out the official announcement on the Jupyter Blog.

JupyterHub 1.0 releases with named servers, support for TLS encryption and more
Introducing Jupytext: Jupyter notebooks as Markdown documents, Julia, Python or R scripts
JupyterLab v0.32.0 releases
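To get a feel for the workflow, the sketch below shows a single notebook cell that builds an ipywidgets control, which Voila can then serve as a standalone app. The notebook name dashboard.ipynb and the specific widget wiring are illustrative assumptions.

# One cell of dashboard.ipynb -- an interactive control for Voila to serve.
import ipywidgets as widgets
from IPython.display import display

slider = widgets.IntSlider(description="x", min=0, max=10, value=5)
label = widgets.Label()

def update(change):
    # Recompute the label text whenever the slider value changes.
    label.value = "x squared = {}".format(change["new"] ** 2)

slider.observe(update, names="value")
update({"new": slider.value})  # initialize the label once
display(widgets.VBox([slider, label]))

# Then, from a terminal:  pip install voila  &&  voila dashboard.ipynb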


GitHub now allows issue transfer between repositories; a public beta version

Savia Lobo
01 Nov 2018
3 min read
Yesterday, GitHub announced that repository admins can now transfer issues from one repository to another, better-fitting repository to help those issues find their home. The feature is currently in public beta. Nat Friedman, CEO of GitHub, said in his tweet, "We've just shipped the ability to transfer an issue from one repo to another. This is one of the most-requested GitHub features. Feels good!"

When a user transfers an issue, the comments, assignees, and issue timeline events are retained. The issue's labels, projects, and milestones are not retained, although users can see past activity in the issue's timeline. People or teams who are mentioned in the issue receive a notification letting them know that the issue has been transferred to a new repository. The original URL redirects to the new issue's URL. People who don't have read permissions in the new repository see a banner letting them know that the issue has been transferred to a repository they can't access.

Permission levels for issue transfer between repositories
People with owner or team maintainer roles can manage repository access with teams. Each team can have different repository access permissions. There are three types of repository permissions, i.e. Read, Write, and Admin, available for people or teams collaborating on repositories that belong to an organization. To transfer an open issue to another repository, the user needs admin permissions on the repository the issue is in and on the repository the issue is being transferred to. If the issue is being transferred from a repository that's owned by an organization you are a member of, you must transfer it to another repository within your organization. To know more about the repository permission levels, visit the GitHub Help post.

Steps to transfer an open issue to another repository
1. On GitHub, navigate to the main page of the repository.
2. Under your repository name, click Issues.
3. In the list of issues, click the issue you'd like to transfer.
4. In the right sidebar, click Transfer this issue.
5. In "Choose a repository," select the repository you want to transfer the issue to.
6. Click Transfer issue.

GitHub Business Cloud is now FedRAMP authorized
GitHub updates developers and policymakers on EU copyright Directive at Brussels
GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage


Home Assistant: an open source Python home automation hub to rule all things smart

Prasad Ramesh
25 Aug 2018
2 min read
We have Amazon Alexa, Google Home, and Philips Hue for smart actions in your home, but they are individual products that require different controls. What if all of your smart devices could work together under a master hub? That is Home Assistant.

Home Assistant is an automation platform that can run on a Raspberry Pi. It acts as a central hub for connecting and automating all your smart devices. It supports services like IFTTT, Pushbullet, Google Cast, and many others; currently, over a thousand components are supported. It tracks the state of all the installed smart devices in your home, and all the devices can be controlled from a single, mobile-friendly interface. For security and privacy, all operations via Home Assistant are done locally, meaning no data is stored on the cloud.

The Home Assistant website advertises functions like having lights turn on at sunset, or dimming the lights when you watch a movie on Chromecast. There is an all-in-one image called Hass.io for getting started with Home Assistant, along with a guide to installing Hass.io on a Raspberry Pi. The requirements for running Home Assistant are:
Raspberry Pi 3 Model B+ with power supply (at least 2.5A)
A Class 10 or higher micro SD card, 32 GB or bigger
An SD card reader
Ethernet cable (optional; Hass.io can work with WiFi)
For unattended configuration, optionally a USB stick

Home Assistant is a hub; it cannot control anything on its own. Think of it as a master device that passes instructions, communicating with other devices for home automation. Home Assistant can't do anything if there are no smart devices to work with. Since it is open source, there are dozens of contributions from tinkerers and DIY enthusiasts worldwide. You can check out the automation examples to learn more and use them. The installation is very simple, and there is a friendly UI to control your automation tasks. There is plenty of information at the Home Assistant website to get you started. They also have a GitHub repository.

Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration
Apple joins the Thread Group, signalling its Smart Home ambitions with HomeKit, Siri and other IoT products
Amazon Echo vs Google Home: Next-gen IoT war
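Automations such as the lights-at-sunset example above are declared in Home Assistant's YAML configuration. The snippet below is a minimal sketch; the entity name light.living_room is a hypothetical placeholder for whatever device you have set up.

# configuration.yaml -- turn on a light when the sun sets.
automation:
  - alias: "Lights on at sunset"
    trigger:
      - platform: sun
        event: sunset
    action:
      - service: light.turn_on
        entity_id: light.living_room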

macOS gets RPCS3 and Dolphin using Gfx-portability, the Vulkan portability implementation for non-Rust apps

Melisha Dsouza
05 Sep 2018
2 min read
gfx-portability, a Vulkan Portability implementation, allows non-Rust applications that use Vulkan to run on platforms where Vulkan is not natively available, such as macOS. After improving the functionality of gfx-portability's Metal backend by benchmarking Dota 2 and verifying functionality against the Vulkan Conformance Test Suite (CTS), the developers planned to expand their testing to other projects that are open source, already use Vulkan for rendering, and lack strong macOS/Metal support. The projects that matched these criteria were RPCS3 and Dolphin. However, the team discovered various issues with both projects.

RPCS3 blockers
RPCS3 satisfies all the above-mentioned criteria. It is an open-source Sony PlayStation 3 emulator and debugger written in C++ for Windows and Linux. RPCS3 has a Vulkan backend, and some attempts had been made to support macOS previously. The gfx-rs team added surface and swapchain support to start off the macOS integration. This process identified a number of blockers in both gfx-rs and RPCS3, and the RPCS3 developers and the gfx-rs team collaborated to quickly address them. Once the blockers were addressed, gameplay was rendered within RPCS3.

Dolphin support for macOS
Dolphin, the emulator for two recent Nintendo video game consoles, was actively working on adding support for macOS. While testing it with gfx-portability, the teams noticed some further minor bugs in gfx. The issues were addressed, and the teams were able to render real gameplay.

Continuous releases for the masses
The team has started automatically releasing gfx-portability binaries under the latest GitHub release of the portability repository. Currently the team provides macOS (Metal) and Linux (Vulkan) binaries, and will add Windows (Direct3D 12/11 and Vulkan) binaries soon. These releases ensure that users don't have to build gfx-portability themselves in order to test it with an existing project. The binaries can be used either with the Vulkan loader on macOS or by linking them directly from an application.

The team was able to run RPCS3 and Dolphin on top of gfx-portability's Metal backend, and only had to address some minor issues in the process. Stability and performance will improve as more real-world use cases are tested. You can read more about this on gfx-rs.github.io.

OpenAI Five loses against humans in Dota 2 at The International 2018
How to use artificial intelligence to create games with rich and interactive environments [Tutorial]
Best game engines for AI game development


Microsoft’s .NET Core 2.1 now powers Bing.com

Melisha Dsouza
21 Aug 2018
4 min read
Microsoft is ever striving to make its products run better, and it can add yet another accomplishment to the list: Microsoft's search engine, Bing, is now running fully on .NET Core 2.1, as announced by the .NET engineering team in their blog yesterday. .NET Core is the slimmed-down, cross-platform version of Microsoft's .NET managed common language runtime. Since Bing runs on thousands of servers spanning many data centers across the globe, .NET Core serves as the perfect platform for it.

Why did Bing migrate to .NET Core 2.1?
Bing has always run on the .NET Framework, but was able to move to .NET Core 2.1 after some recent API additions. Let's take a look at the main reasons for Bing.com's migration to .NET Core.

1. Performance, i.e. serving latency
.NET Core 2.1 has led to an improvement in performance in virtually all areas of the runtime and libraries. Internal server latency over the last few months shows a striking 34% improvement. (A graph illustrating this is available in the source post at blog.msdn.microsoft.com.) The following changes in .NET Core 2.1 are the reasons why the workload and performance have greatly improved.

#1 Vectorization of string.Equals and string.IndexOf/LastIndexOf
HTML rendering and manipulation are string-heavy workloads. Vectorization of string comparisons and indexing operations (major components of string slicing) is the biggest contributor to the performance improvement. You can find more information on the GitHub page for vectorization of string.Equals and string.IndexOf/LastIndexOf.

#2 Devirtualization support for EqualityComparer<T>.Default
One of .NET Core's major software components is a heavy user of Dictionary<int/long, V>, which indirectly benefits from the intrinsic recognition work that was done in the JIT to make Dictionary<K, V> amenable to that optimization. Head over to the GitHub page for more clarity on why this feature empowers .NET Core 2.1.

#3 Software Write Watch for Concurrent GC
This led to a reduction in CPU usage. The implementation relies on a JIT Write Barrier, which inherently increases the cost of a reference store, but that cost is amortized and not noticed in the workload.

#4 Methods with calli are now inline-able
ldftn + calli are used in lieu of delegates (which incur an object allocation) in performance-critical pieces of code where there is a need to call a managed method indirectly. This change allowed method bodies with a calli instruction to be eligible for inlining. The GitHub page provides more insight on this subject.

#5 Improve performance of string.IndexOfAny for 2 & 3 char searches
A common operation in a front-end stack is to search for ':', '/', '/' in a string to delimit portions of a URL. Check out this special-casing improvement that was beneficial throughout the codebase on the GitHub page.

2. Runtime agility
The ability to ship an xcopy version of the runtime inside their application means the team can adopt newer versions of the runtime at a much faster pace. The continuous integration (CI) pipeline runs against .NET Core's daily CI builds, testing functionality and performance all the way through the release.

3. ReadyToRun images
Managed applications can often have poor startup performance, as methods first have to be JIT-compiled to machine code. .NET Framework has a precompilation technology, NGEN. On .NET Core, the crossgen tool allows the code to be precompiled as a pre-deployment step, such as in the build lab, and the images deployed to production are Ready To Run! This feature was not supported on the previous .NET implementation.

The .NET Core team is striving to provide Bing.com users fast results, and the latest software and technologies used by their developers will ensure that .NET Core does not fail Bing.com. Read the detailed overview on Microsoft's blog.

Say hello to FASTER: a new key-value store for large state management by Microsoft
Microsoft Azure's new governance DApp: An enterprise blockchain without mining
.NET Core completes move to the new compiler – RyuJIT


Build a Virtual Reality Solar System in Unity for Google Cardboard

Sugandha Lahoti
25 Apr 2018
21 min read
In today's tutorial, we will feature visualization of a newly discovered solar system. We will leverage the virtual reality development process for this project in order to illustrate the power of VR and the ease of use of the Unity 3D engine. This project is a dioramic scene, where the user floats in space, observing the movement of planets within the TRAPPIST-1 planetary system. In February 2017, astronomers announced the discovery of seven planets orbiting an ultra-cool dwarf star slightly larger than Jupiter. We will use this information to build a virtual environment to run on Google Cardboard (Android and iOS) or other compatible devices.

We will additionally cover the following topics:
Platform setup: Download and install the platform-specific software needed to build an application on your target device. Experienced mobile developers with the latest Android or iOS SDK may skip this step.
Google Cardboard setup: This package of development tools facilitates display and interaction on a Cardboard device.
Unity environment setup: Initializing Unity's Project Settings in preparation for a VR environment.
Building the TRAPPIST-1 system: Design and implement the Solar System project.
Build for your device: Build and install the project onto a mobile device for viewing in Google Cardboard.

Platform setup

Before we begin building the solar system, we must set up our computer environment to build the runtime application for a given VR device. If you have never built a Unity application for Android or iOS, you will need to download and install the Software Development Kit (SDK) for your chosen platform. An SDK is a set of tools that will let you build an application for a specific software package, hardware platform, game console, or operating system. Installing the SDK may require additional tools or specific files to complete the process, and the requirements change from year to year as operating systems and hardware platforms undergo updates and revisions.

To deal with this nightmare, Unity maintains an impressive set of platform-specific instructions to ease the setup process. Their list contains detailed instructions for the following platforms: Apple Mac, Apple TV, Android, iOS, Samsung TV, Standalone, Tizen, Web Player, WebGL, and Windows.

For this project, we will be building for the most common mobile devices: Android or iOS. The first step is to visit either of the following links to prepare your computer:
Android: Android users will need the Android Developer Studio, Java Virtual Machine (JVM), and assorted drivers. Follow this link for installation instructions and files: https://docs.unity3d.com/Manual/Android-sdksetup.html.
Apple iOS: iOS builds are created on a Mac and require an Apple Developer account and the latest version of the Xcode development tools. However, if you've previously built an iOS app, these conditions will have already been met by your system. For the complete instructions, follow this link: https://docs.unity3d.com/Manual/iphone-GettingStarted.html.

Google Cardboard setup

Like the Unity documentation website, Google also maintains an in-depth guide for the Google VR SDK for Unity set of tools and examples. This SDK provides the following features on the device:
User head tracking
Side-by-side stereo rendering
Detection of user interactions (via trigger or controller)
Automatic stereo configuration for a specific VR viewer
Distortion correction
Automatic gyro drift correction

These features are all contained in one easy-to-use package that will be imported into our Unity scene.
Download the SDK from the following link before moving on to the next step: http://developers.google.com/cardboard/unity/download. At the time of writing, the current version of the Google VR SDK for Unity is version 1.110.1, and it is available via a GitHub repository. The previous link should take you to the latest version of the SDK; however, when starting a new project, be sure to compare the SDK version requirements with your installed version of Unity.

Setting up the Unity environment

Like all projects, we will begin by launching Unity and creating a new project. The first steps will create a project folder which contains several files and directories:
1. Launch the Unity application.
2. Choose the New option after the application splash screen loads.
3. Save the project as Trappist1 in a location of your choice, as demonstrated in Figure 2.2.

To prepare for VR, we will adjust the Build Settings and Player Settings windows:
1. Open Build Settings from File | Build Settings.
2. Select the Platform for your target device (iOS or Android).
3. Click the Switch Platform button to confirm the change. The Unity icon in the right-hand column of the platform panel indicates the currently selected build platform. By default, it will appear next to the Standalone option. After switching, the icon should now be on the Android or iOS platform, as shown in Figure 2.3.

Note for Android developers: Ericsson Texture Compression (ETC) is the standard texture compression format on Android. Unity defaults to ETC (default), which is supported on all current Android devices, but it does not support textures that have an alpha channel. ETC2 supports alpha channels and provides improved quality for RGB textures on Android devices that support OpenGL ES 3.0. Since we will not need alpha channels, we will stick with ETC (default) for this project.

4. Open the Player Settings by clicking the button at the bottom of the window. The PlayerSettings panel will open in the Inspector panel.
5. Scroll down to Other Settings (Unity 5.5 through 2017.1) or XR Settings and check the Virtual Reality Supported checkbox. A list of choices will appear for selecting VR SDKs. Add Cardboard support to the list, as shown in Figure 2.4.
6. You will also need to create a valid Bundle Identifier or Package Name under the Identification section of Other Settings. The value should follow the reverse-DNS format of com.yourCompanyName.ProjectName, using alphanumeric characters, periods, and hyphens. The default value must be changed in order to build your application.

Android development note: Bundle Identifiers are unique. When an app is built and released for Android, the Bundle Identifier becomes the app's package name and cannot be changed. This restriction and other requirements are discussed in this Android documentation link: http://developer.Android.com/reference/Android/content/pm/PackageInfo.html.

Apple development note: Once you have registered a Bundle Identifier to a Personal Team in Xcode, the same Bundle Identifier cannot be registered to another Apple Developer Program team in the future. This means that, while testing your game using a free Apple ID and a Personal Team, you should choose a Bundle Identifier that is for testing only; you will not be able to use the same Bundle Identifier to release the game. An easy way to do this is to add Test to the end of whatever Bundle Identifier you were going to use, for example, com.MyCompany.VRTrappistTest.
When you release an app, its Bundle Identifier must be unique to your app and cannot be changed after your app has been submitted to the App Store.

7. Set the Minimum API Level to Android Nougat (API level 24) and leave the Target API on Automatic.
8. Close the Build Settings window and save the project before continuing.
9. Choose Assets | Import Package | Custom Package... to import the GoogleVRForUnity.unitypackage previously downloaded from http://developers.google.com/cardboard/unity/download. The package will begin decompressing the scripts, assets, and plugins needed to build a Cardboard product.
10. When completed, confirm that all options are selected and choose Import.

Once the package has been installed, a new menu titled GoogleVR will be available in the main menu. This provides easy access to the GoogleVR documentation and Editor Settings. Additionally, a directory titled GoogleVR will appear in the Project panel.

11. Right-click in the Project panel and choose Create | Folder to add the following directories: Materials, Scenes, and Scripts.
12. Choose File | Save Scenes to save the default scene. I'm using the very original Main Scene and saving it to the Scenes folder created in the previous step.
13. Choose File | Save Project from the main menu to complete the setup portion of this project.

Building the TRAPPIST-1 System

Now that we have Unity configured to build for our device, we can begin building our space-themed VR environment. We have designed this project to focus on building and deploying a VR experience. If you are moderately familiar with Unity, this project will be very simple. Again, this is by design. However, if you are relatively new, then the basic 3D primitives, a few textures, and a simple orbiting script will be a great way to expand your understanding of the development platform:

1. Create a new script by selecting Assets | Create | C# Script from the main menu. By default, the script will be titled NewBehaviourScript. Single-click this item in the Project window and rename it OrbitController. Finally, keep the project organized by dragging OrbitController's icon to the Scripts folder.
2. Double-click the OrbitController script item to edit it. Doing this will open a script editor as a separate application and load the OrbitController script for editing. The following code block illustrates the default script text:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class OrbitController : MonoBehaviour {

    // Use this for initialization
    void Start () {

    }

    // Update is called once per frame
    void Update () {

    }
}

This script will be used to determine each planet's location, orientation, and relative velocity within the system. The specific dimensions will be added later, but we will start by adding some public variables.

3. Starting on line 7, add the following five statements:

public Transform orbitPivot;
public float orbitSpeed;
public float rotationSpeed;
public float planetRadius;
public float distFromStar;

Since we will be referring to these variables in the near future, we need a better understanding of how they will be used:
orbitPivot stores the position of the object that each planet will revolve around (in this case, it is the star TRAPPIST-1).
orbitSpeed is used to control how fast each planet revolves around the central star.
rotationSpeed is how fast an object rotates around its own axis.
planetRadius represents a planet's radius compared to Earth. This value will be used to set the planet's size in our environment.
distFromStar is a planet's distance in Astronomical Units (AU) from the central star.

4. Continue by adding the following lines of code to the Start() method of the OrbitController script:

// Use this for initialization
void Start () {
    // Creates a random position along the orbit path
    Vector2 randomPosition = Random.insideUnitCircle;
    transform.position = new Vector3 (randomPosition.x, 0f, randomPosition.y) * distFromStar;
    // Sets the size of the GameObject to the Planet radius value
    transform.localScale = Vector3.one * planetRadius;
}

As shown within this script, the Start() method is used to set the initial position of each planet. We will add the dimensions when we create the planets, and this script will pull those values to set the starting point of each game object at runtime.

5. Next, modify the Update() method by adding two additional lines of code, as indicated in the following code block:

// Update is called once per frame. This code block updates the Planet's
// position during each runtime frame.
void Update () {
    this.transform.RotateAround (orbitPivot.position, Vector3.up, orbitSpeed * Time.deltaTime);
    this.transform.Rotate (Vector3.up, rotationSpeed * Time.deltaTime);
}

This method is called once per frame while the program is running. Within Update(), the location for each object is determined by computing where the object should be during the next frame. this.transform.RotateAround uses the sun's pivot point to determine where the current GameObject (identified in the script by this) should appear in this frame. Then this.transform.Rotate updates how much the planet has rotated since the last frame.

6. Save the script and return to Unity.

Now that we have our first script, we can begin building the star and its planets. For this process, we will use Unity's primitive 3D GameObjects to create the celestial bodies:

1. Create a new sphere using GameObject | 3D Object | Sphere. This object will represent the star TRAPPIST-1. It will reside in the center of our solar system and will serve as the pivot for all seven planets.
2. Right-click on the newly created Sphere object in the Hierarchy window and select Rename. Rename the object Star.
3. Using the Inspector tab, set the object to Position: 0,0,0 and Scale: 1,1,1.
4. With the Star selected, locate the Add Component button in the Inspector panel. Click the button and enter orbitcontroller in the search box. Double-click on the OrbitController script icon when it appears. The script is now a component of the star.
5. Create another sphere using GameObject | 3D Object | Sphere and position it anywhere in the scene, with the default scale of 1,1,1. Rename the object Planet b.

Figure 2.5, from the TRAPPIST-1 Wikipedia page, shows the relative orbital period, distance from the star, radius, and mass of each planet. We will use these dimensions and names to complete the setup of our VR environment. Each value will be entered as public variables for their associated GameObjects.

6. Apply the OrbitController script to the Planet b asset by dragging the script icon to the planet in the Scene window or to the Planet b object in the Hierarchy window. Planet b is our first planet, and it will serve as a prototype for the rest of the system.
7. Set the Orbit Pivot point of Planet b in the Inspector. Do this by clicking the Selector Target next to the Orbit Pivot field (see Figure 2.6). Then, select Star from the list of objects. The field value will change from None (Transform) to Star (Transform).
Our script will use the origin point of the selected GameObject as its pivot point.

8. Go back and select the Star GameObject and set the Orbit Pivot to Star, as we did with Planet b.
9. Save the scene.

Now that our template planet has the OrbitController script, we can create the remaining planets:

1. Duplicate the Planet b GameObject six times by right-clicking on it and choosing Duplicate.
2. Rename each copy Planet c through Planet h.
3. Set the public variables for each GameObject, using the following chart:

GameObject   Orbit Speed   Rotation Speed   Planet Radius   Dist From Star
Star         0             2                6               0
Planet b     .151          5                0.85            11
Planet c     .242          5                1.38            15
Planet d     .405          5                0.41            21
Planet e     .61           5                0.62            28
Planet f     .921          5                0.68            37
Planet g     1.235         5                1.34            45
Planet h     1.80          5                0.76            60

Table 2.1: TRAPPIST-1 GameObject Transform settings

4. Create an empty GameObject by right-clicking in the Hierarchy panel and selecting Create Empty. This item will help keep the Hierarchy window organized.
5. Rename the item Planets and drag Planet b through Planet h into the empty item.

This completes the layout of our solar system, and we can now focus on setting a location for the stationary player. Our player will not have the luxury of motion, so we must determine the optimal point of view for the scene:

1. Run the simulation. Figure 2.7 illustrates the layout being used to build and edit the scene.
2. With the scene running and the Main Camera selected, use the Move and Rotate tools or the Transform fields to readjust the position of the camera in the Scene window, or to find a position with a wide view of the action in the Game window, or a position with an interesting vantage point. Do not stop the simulation when you identify a position; stopping the simulation will reset the Transform fields back to their original values.
3. Click the small Options gear in the Transform panel and select Copy Component. This will store a copy of the Transform settings to the clipboard.
4. Stop the simulation. You will notice that the Main Camera position and rotation have reverted to their original settings.
5. Click the Transform gear again and select Paste Component Values to set the Transform fields to the desired values.
6. Save the scene and project.

You might have noticed that we cannot really tell how fast the planets are rotating. This is because the planets are simple spheres without details. This can be fixed by adding materials to each planet. Since we really do not know what these planets look like, we will take a creative approach and go for aesthetics over scientific accuracy. The internet is a great source for the images we need; a simple Google search for planetary textures will return thousands of options. Use a collection of these images to create materials for the planets and the TRAPPIST-1 star:

1. Open a web browser and search Google for planet textures. You will need one texture for each planet and one more for the star.
2. Download the textures to your computer and rename them something memorable (that is, planet_b_mat...). Alternatively, you can download a complete set of textures from the Resources section of the supporting website: http://zephyr9.pairsite.com/vrblueprints/Trappist1/.
3. Copy the images to the Trappist1/Assets/Materials folder.
4. Switch back to Unity and open the Materials folder in the Project panel.
5. Drag each texture to its corresponding GameObject in the Hierarchy panel. Notice that each time you do this, Unity creates a new material and assigns it to the planet GameObject.
6. Run the simulation again and observe the movement of the planets.
7. Adjust the individual planet Orbit Speed and Rotation Speed values until the motion feels natural. Take a bit of creative license here, leaning more on the scene's aesthetic quality than on scientific accuracy.
8. Save the scene and the project.

For the final design phase, we will add a space-themed background using a Skybox. Skyboxes are rendered components that create the backdrop for Unity scenes. They illustrate the world beyond the 3D geometry, creating an atmosphere to match the setting. Skyboxes can be constructed of solids, gradients, or images using a variety of graphic programs and applications. For this project, we will find a suitable component in the Asset Store:

1. Load the Asset Store from the Window menu.
2. Search for a free space-themed skybox using the phrase space skybox price:0.
3. Select a package and use the Download button to import the package into the Scene.
4. Select Window | Lighting | Settings from the main menu.
5. In the Scene section, click on the Selector Target for the Skybox Material and choose the newly downloaded skybox.
6. Save the scene and the project.

With that last step complete, we are done with the design and development phase of the project. Next, we will move on to building the application and transferring it to a device.

Building the application

To experience this simulation in VR, we need to have our scene run on a head-mounted display as a stereoscopic display. The app needs to compile the proper viewing parameters, capture and process head-tracking data, and correct for visual distortion. When you consider the number of VR devices we would have to account for, the task is nothing short of daunting. Luckily, Google VR facilitates all of this in one easy-to-use plugin.

The process for building the mobile application will depend on the mobile platform you are targeting. If you have previously built and installed a Unity app on a mobile device, many of these steps will have already been completed, and a few will apply updates to your existing software.

Note: Unity is a fantastic software platform with a rich community and an attentive development staff. During the writing of this book, we tackled software updates (5.5 through 2017.3) and various changes in the VR development process. Although we are including the simplified building steps, it is important to check Google's VR documentation for the latest software updates and detailed instructions:
Android: https://developers.google.com/vr/unity/get-started
iOS: https://developers.google.com/vr/unity/get-started-ios

Android instructions

If you are just starting out building applications from Unity, we suggest starting with the Android process. The workflow for getting your project exported from Unity and playing on your device is short and straightforward:

1. On your Android device, navigate to Settings | About phone or Settings | About Device | Software Info.
2. Scroll down to Build number and tap the item seven times. A popup will appear, confirming that you are now a developer.
3. Now navigate to Settings | Developer options | Debugging and enable USB debugging.

Building an Android application

1. In your project directory (at the same level as the Assets folder), create a Build folder.
2. Connect your Android device to the computer using a USB cable. You may see a prompt asking you to confirm that you wish to enable USB debugging on the device. If so, click OK.
3. In Unity, select File | Build Settings to load the Build dialog.
4. Confirm that the Platform is set to Android. If not, choose Android and click Switch Platform.
With that last step complete, we are done with the design and development phase of the project. Next, we will move on to building the application and transferring it to a device.

Building the application

To experience this simulation in VR, we need our scene to run on a head-mounted display as a stereoscopic display. The app needs to compile the proper viewing parameters, capture and process head-tracking data, and correct for visual distortion. When you consider the number of VR devices we would have to account for, the task is nothing short of daunting. Luckily, Google VR facilitates all of this in one easy-to-use plugin.

The process for building the mobile application will depend on the mobile platform you are targeting. If you have previously built and installed a Unity app on a mobile device, many of these steps will have already been completed, and a few will simply apply updates to your existing software.

Note: Unity is a fantastic software platform with a rich community and an attentive development staff. During the writing of this book, we tackled software updates (5.5 through 2017.3) and various changes in the VR development process. Although we are including the simplified build steps, it is important to check Google's VR documentation for the latest software updates and detailed instructions:

Android: https://developers.google.com/vr/unity/get-started
iOS: https://developers.google.com/vr/unity/get-started-ios

Android Instructions

If you are just starting out building applications from Unity, we suggest beginning with the Android process. The workflow from exporting your project in Unity to playing it on your device is short and straightforward:

1. On your Android device, navigate to Settings | About phone or Settings | About Device | Software Info.
2. Scroll down to Build number and tap the item seven times. A popup will appear, confirming that you are now a developer.
3. Now navigate to Settings | Developer options | Debugging and enable USB debugging.

Building an Android application

1. In your project directory (at the same level as the Assets folder), create a Build folder.
2. Connect your Android device to the computer using a USB cable. You may see a prompt asking you to confirm that you wish to enable USB debugging on the device. If so, click OK.
3. In Unity, select File | Build Settings to load the Build dialog.
4. Confirm that the Platform is set to Android. If not, choose Android and click Switch Platform.
5. Note that Scenes/Main Scene should be loaded and checked in the Scenes In Build portion of the dialog. If not, click the Add Open Scenes button to add Main Scene to the list of scenes to be included in the build.
6. Click the Build button. This will create an Android executable application with the .apk file extension.

Invalid command Android error

Some Android users have reported an error relating to the Android SDK Tools location. The problem has been confirmed in many installations prior to Unity 2017.1. If this problem occurs, the best solution is to downgrade to a previous version of the SDK Tools. This can be done by following the steps outlined here:

1. Locate and delete the Android SDK Tools folder [Your Android SDK Root]/tools. This location will depend on where the Android SDK package was installed. For example, on my computer the Android SDK Tools folder is found at C:\Users\cpalmer\AppData\Local\Android\sdk.
2. Download SDK Tools from http://dl-ssl.google.com/Android/repository/tools_r25.2.5-windows.zip.
3. Extract the archive to the SDK root directory.
4. Re-attempt the Build project process.

If this is the first time you are creating an Android application, you might get an error indicating that Unity cannot locate your Android SDK root directory. If this is the case, follow these steps:

1. Cancel the build process and close the Build Settings... window.
2. Choose Edit | Preferences... from the main menu.
3. Choose External Tools and scroll down to Android.
4. Enter the location of your Android SDK root folder. If you have not installed the SDK, click the download button and follow the installation process.

Install the app onto your phone and load the phone into your Cardboard device.
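If you find yourself rebuilding the APK often, the Build Settings steps above can also be driven from an editor script. The sketch below is our own illustration, not part of the book's workflow; the scene path and output name are hypothetical and must match your project:

```csharp
using UnityEditor;

// Illustrative editor utility (place in an Editor folder). It mirrors the
// manual File | Build Settings | Build step described above.
public static class AndroidBuilder
{
    [MenuItem("Build/Build Android APK")]
    public static void BuildApk()
    {
        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/Main Scene.unity" }, // hypothetical scene path
            locationPathName = "Build/Trappist1.apk",            // lands in the Build folder created earlier
            target = BuildTarget.Android,
            options = BuildOptions.None
        };
        BuildPipeline.BuildPlayer(options);
    }
}
```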
iOS Instructions

The process for building an iOS app is much more involved than the Android process. There are two different types of builds:

- Build for testing
- Build for distribution (which requires an Apple Developer License)

In either case, you will need the following items to build a modern iOS app:

- A Mac computer running OS X 10.11 or later
- The latest version of Xcode
- An iOS device and USB cable
- An Apple ID
- Your Unity project

For this demo, we will build an app for testing, and we will assume you have completed the Getting Started steps (https://docs.unity3d.com/Manual/iphone-GettingStarted.html) from Section 1. If you do not yet have an Apple ID, obtain one from the Apple ID site (http://appleid.apple.com/). Once you have obtained an Apple ID, it must be added to Xcode:

1. Open Xcode.
2. From the menu bar at the top of the screen, choose Xcode | Preferences. This will open the Preferences window.
3. Choose Accounts at the top of the window to display information about the Apple IDs that have been added to Xcode.
4. To add an Apple ID, click the plus sign at the bottom left corner and choose Add Apple ID.
5. Enter your Apple ID and password in the resulting popup box. Your Apple ID will then appear in the list.
6. Select your Apple ID. Apple Developer Program teams are listed under the heading of Team. If you are using the free Apple ID, you will be assigned to Personal Team. Otherwise, you will be shown the teams you are enrolled in through the Apple Developer Program.

Preparing your Unity project for iOS

1. Within Unity, open the Build Settings from the top menu (File | Build Settings).
2. Confirm that the Platform is set to iOS. If not, choose iOS and click Switch Platform at the bottom of the window.
3. Select the Build & Run button.

Building an iOS application

1. Xcode will launch with your Unity project.
2. Select your platform and follow the standard process for building an application from Xcode.
3. Install the app onto your phone and load the phone into your Cardboard device.

We looked at the basic Unity workflow for developing VR experiences. We also provided a stationary solution so that we could focus on the development process. The Cardboard platform provides access to VR content from a mobile platform, and it also allows for touch and gaze controls.

You read an excerpt from the book Virtual Reality Blueprints, written by Charles Palmer and John Williamson. In this book, you will learn how to create compelling virtual reality experiences for mobile and desktop with three top platforms: Cardboard VR, Gear VR, and Oculus VR.

Read More
- Top 7 modern Virtual Reality hardware systems
- Virtual Reality for Developers: Cardboard, Gear VR, Rift, and Vive
Git-bug: A new distributed bug tracker embedded in git

Melisha Dsouza
20 Aug 2018
3 min read
git-bug is a distributed bug tracker that is embedded in git. Using git's internal storage ensures that no extra files are added to your project, and you can push your bugs to the same git remote that you are already using to collaborate with other people. The main idea behind implementing a distributed bug tracker in git was to stop relying on a web service somewhere to deal with bugs; thanks to this design, browsing and editing bug reports offline is no longer a pain.

While git-bug addresses a pressing need, note that the project is not yet ready for full-fledged use: it is currently a proof of concept, released just 3 days ago at version 0.2.0. Reddit is abuzz with views on the release, with users weighing in on both its merits and its drawbacks (comment screenshots: reddit.com).

Now that you want to get your hands on git-bug, let's look at how to get started.

Installing git-bug, Linux packages needed, and CLI usage

To install git-bug, all you need to do is execute the following command:

go get github.com/MichaelMure/git-bug

If it's not done already, add the golang binary directory to your PATH:

export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

Alternatively, you can install pre-compiled binaries by following three simple steps:

1. Head over to the release page and download the appropriate binary for your system.
2. Copy the binary anywhere in your PATH.
3. Rename the binary to git-bug (or git-bug.exe on Windows).

The only Linux package available for this release is the Arch Linux (AUR) package.

You can then use the CLI to work with git-bug. Create a new bug:

git bug new

Your favorite editor will open to write a title and a message. You can push your new entry to a remote:

git bug push [<remote>]

And pull for updates:

git bug pull [<remote>]

List existing bugs:

git bug ls

Use commands like show, comment, open or close to display and modify bugs. For more details about each command, you can run git bug <command> --help or scan the command's documentation.

Features of git-bug

#1 Interactive user interface for the terminal

Use the git bug termui command to browse and edit bugs. A short video on the project page demonstrates how easy and interactive it is to browse and edit bugs.

#2 Launch a rich web UI

Take a look at the web UI that is obtained with git bug webui (screenshots: github.com). This web UI is entirely packed inside the same Go binary and serves static content through a localhost HTTP server. It connects to the backend through a GraphQL API; take a look at the schema for more clarity.

Additional planned features include:

- media embedding
- import/export of GitHub issues
- an extendable data model to support arbitrary bug trackers
- inflatable raptor

Every new release is expected to come with exciting new features, but it is also coupled with a few minor constraints. You can check out some of the minor inconveniences listed on the GitHub page. We can't wait for the release to reach a fully working state. Before that, if you need any additional information on how git-bug works, head over to the GitHub page.

- Snapchat source code leaked and posted to GitHub
- GitHub open sources its GitHub Load Balancer (GLB) Director
- Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?
Frenemies: Intel and AMD partner on laptop chip to keep Nvidia at bay

Abhishek Jha
09 Nov 2017
3 min read
For decades, Intel and AMD have remained bitter archrivals. Today, they find themselves teaming up to thwart a common enemy: Nvidia. As Intel revealed its partnership with Advanced Micro Devices (AMD) over a next-generation notebook chip, it was the first time the two chip giants have collaborated since the '80s.

The proposed chip for thin and lightweight laptops combines an Intel processor and an AMD graphics unit for complex video gaming. The new series of processors will be part of Intel's 8th-generation Core H-series mobile chips, expected to hit the market in the first quarter of 2018. What it means is that Intel's high-performance x86 cores will be combined with AMD Radeon graphics in the same processor package using Intel's EMIB multi-die technology. That is not all: Intel is also bundling the design with built-in High Bandwidth Memory (HBM2). The new processor, Intel claims, reduces the usual silicon footprint by about 50%. And with a 'semi-custom' graphics processor from AMD, enthusiasts can look forward to discrete-graphics-level performance for playing games, editing photos or videos, and other tasks that can leverage modern GPU technologies.

What does AMD get?

Having struggled to remain profitable in recent times, AMD has been losing share in the discrete notebook GPU market. The deal could bring additional revenue and increased market share. Most importantly, the laptops built with the new processors won't be competing with AMD's Ryzen chips (which are also designed for ultrathin laptops). AMD clarified the difference: while the new Intel chips are designed for serious gamers, the Ryzen chips (due out at the end of the year) can run games but are not specifically designed for that purpose.

"Our collaboration with Intel expands the installed base for AMD Radeon GPUs and brings to market a differentiated solution for high-performance graphics," Scott Herkelman, vice president and general manager of AMD's Radeon Technologies Group, said. "Together we are offering gamers and content creators the opportunity to have a thinner-and-lighter PC capable of delivering discrete performance-tier graphics experiences in AAA games and content creation applications."

While more information will be available in the future, the first machines with the new technology are expected to be released in the first quarter of 2018. Nvidia's stock fell on the news, while both AMD and Intel saw their shares surge.

A rivalry that began when AMD reverse-engineered the Intel 8080 microchip in 1975 could still be far from over, but in graphics, the two have been rather cordial. Despite hating each other since their formation, both decided to pick the other as the lesser evil over Nvidia. This is why the Intel-AMD laptop chip partnership has a definite future. Currently centered on laptop solutions, it could even stretch to desktops, who knows!