
Tech News - Single Board Computers

22 Articles

Introducing Raspberry Pi TV HAT, a new add-on that lets you stream live TV

Prasad Ramesh
19 Oct 2018
2 min read
Yesterday the Raspberry Pi Foundation launched a new device called the Raspberry Pi TV HAT, a small add-on board with a TV antenna port that lets you decode and stream live TV. The TV HAT is roughly the size of a Raspberry Pi Zero board. It connects to the Raspberry Pi via the GPIO connector and has a port for a TV antenna connector. The new add-on is built on a new HAT (Hardware Attached on Top) form factor: a half-sized HAT matching the outline of Raspberry Pi Zero boards.

[Image source: Raspberry Pi website]

TV HAT specifications and requirements

The add-on board has a Sony CXD2880 TV tuner. It supports TV standards such as DVB-T2 (1.7MHz, 5MHz, 6MHz, 7MHz, 8MHz channel bandwidth) and DVB-T (5MHz, 6MHz, 7MHz, 8MHz channel bandwidth). The frequencies it can receive are VHF III, UHF IV, and UHF V. Raspbian Stretch (or later) is required to use the Raspberry Pi TV HAT, and TVHeadend is the recommended software for getting started with TV streams. There is a 'Getting Started' guide on the Raspberry Pi website.

Watch on the Raspberry Pi

With the TV HAT you can receive and view television on a Raspberry Pi board. The Pi can also be used as a server to stream television over a network to other devices. When running as a server, the TV HAT works with all 40-pin GPIO Raspberry Pi boards. Watching TV on the Pi itself needs more processing power, so a Pi 2, 3, or 3B+ is recommended.

[Image: The TV HAT connected to a Raspberry Pi board. Source: Raspberry Pi website]

Streaming over a network

Connecting a TV HAT to your network lets you view streams on any device connected to that network, including computers, smartphones, and tablets.

Initially, the TV HAT will be available only in Europe. It is now on sale for $21.50; visit the Raspberry Pi website for more details.

Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
How to secure your Raspberry Pi board [Tutorial]
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
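Once TVHeadend is set up as described above, other devices can pull streams from the Pi over HTTP. Here is a minimal Python sketch; the raspberrypi.local hostname, TVHeadend's default port 9981, the /playlist endpoint, and anonymous streaming access are all assumptions about your particular configuration.

    # Fetch the channel list from a TV HAT Pi running TVHeadend.
    import urllib.request

    TVHEADEND_URL = "http://raspberrypi.local:9981/playlist"  # assumed endpoint

    with urllib.request.urlopen(TVHEADEND_URL) as resp:
        playlist = resp.read().decode("utf-8")

    # The response is an M3U playlist: each channel is an #EXTINF entry
    # followed by a stream URL that a media player such as VLC can open.
    print(playlist)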


A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer

Prasad Ramesh
02 Oct 2018
2 min read
An open-source libre GPU project is in the works by Luke Kenneth Casson Leighton, the hardware engineer who developed the EOMA68, an earth-friendly computer. The project already has access to $250k USD in funding.

The basic idea for this "libre GPU" is to use a RISC-V processor and make the GPU mostly software-based. It will leverage the LLVM compiler infrastructure and a software-based Vulkan renderer to emit code and run on the RISC-V processor. The Vulkan implementation will be written in the Rust programming language.

The project's current road-map covers only the software side: figuring out the state of the RISC-V LLVM back-end, writing a user-space graphics driver, and implementing the necessary bits for proposed RISC-V extensions like "Simple-V". While doing this, the team will start working out the hardware design and the rest of the project. The road-map is quite simplified for the arduous task at hand. The website notes: "Once you've been through the 'Extension Proposal Process' with Simple-V, it need never be done again, not for one single parallel / vector / SIMD instruction, ever again."

This process will include creating a fixed-function 3D "FP to ARGB" custom instruction and a custom extension with special 3D pipelines. With Simple-V, there is no need to worry about how those operations would be parallelised. This is not a new concept; it is borrowed directly from videocore-iv, which calls it "virtual parallelism".

Combining RISC-V, Rust, LLVM, and Vulkan into one open-source project is an enormous effort on both the software and hardware ends, and a difficult one even with the funding, considering it is a software-based GPU. It is worth noting that the EOMA68 project, started by Luke in 2016, raised over $227k USD from crowdfunding participants and has not shipped yet.

To know more about this project, visit the libre risc-v website.

NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD's deep learning plans
PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn
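To make the "virtual parallelism" idea concrete, here is a purely illustrative Python sketch, not the actual Simple-V specification: one scalar operation, tagged with a vector length, is looped by the hardware over consecutive register lanes, while software sees a single instruction.

    # Illustrative only: how one scalar ALU op becomes a vector op under
    # "virtual parallelism". The names and semantics here are hypothetical.
    def simple_v(scalar_op, vector_length, src_a, src_b, dest):
        # Hardware re-issues the same scalar operation across N register lanes.
        for lane in range(vector_length):
            dest[lane] = scalar_op(src_a[lane], src_b[lane])

    regs_a = [1.0, 2.0, 3.0, 4.0]
    regs_b = [10.0, 20.0, 30.0, 40.0]
    regs_d = [0.0] * 4
    simple_v(lambda a, b: a + b, 4, regs_a, regs_b, regs_d)
    print(regs_d)  # [11.0, 22.0, 33.0, 44.0]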


Arduino now has a command line interface (CLI)

Prasad Ramesh
27 Aug 2018
2 min read
Listening to the Arduino developer community, the Arduino team has released a command line interface (CLI) for the platform. The CLI is a single binary file that provides most of the features present in the IDE; until now, there was a wide gap between using the IDE and being able to do everything in Arduino from the command line. The CLI lets you install new libraries, create new projects, and compile projects directly from the command line, giving developers a way to test their projects quickly. You can also create your own libraries and compile them directly, whether for your own or third-party code. Installing project dependencies is as easy as typing the following command:

    arduino-cli lib install "WiFi101" "WiFi101OTA"

In addition, the CLI has a JSON interface for easy parsing by other programs. There were many requests for makefile integration, and support for it has been added. The Arduino CLI runs on both ARM and Intel (x86, x86_64) architectures, which means it can be installed on a Raspberry Pi or on any server.

Massimo Banzi, Arduino founder, stated: "I think it is very exciting for Arduino, one single binary that does all the complicated things in the Arduino IDE."

The Arduino team looks forward to seeing people integrate this tool into various IDEs. In their blog post, the team writes: "Imagine having the Arduino IDE or Arduino Create Editor speaking directly to Arduino CLI – and you having full control of it. You will be able to compile on your machine or on our online servers, detect your board or create your own IDE on top of it!"

The CLI is a strong alternative to PlatformIO and works on all three major operating systems: Linux, Windows, and macOS. The code is open source, but you will need a license for commercial use.

Visit the GitHub repository to get started with Arduino CLI.

How to assemble a DIY selfie drone with Arduino and ESP8266
How to build an Arduino based 'follow me' drone
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
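Since the announcement highlights the JSON interface, here is a minimal sketch of driving arduino-cli from another program. It assumes arduino-cli is on your PATH and a board is plugged in; the exact shape of the JSON output is not asserted here, the sketch simply prints whatever the CLI reports.

    # List connected boards via arduino-cli's machine-readable JSON output.
    import json
    import subprocess

    result = subprocess.run(
        ["arduino-cli", "board", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    boards = json.loads(result.stdout)
    print(boards)  # detected ports, with board names and FQBNs where known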


Rigetti plans to deploy a 128-qubit quantum computer

Fatema Patrawala
16 Aug 2018
3 min read
Rigetti Computing is committed to building the world's most powerful computers, and believes the true value of quantum will be unlocked by practical applications. Rigetti CEO Chad Rigetti recently posted on Medium about the company's plan to deploy a 128-qubit quantum computing system, challenging Google, IBM, and Intel for leadership in this emerging technology. The system is planned for deployment within the next 12 months, and the company is also investing in resources at the application layer to encourage experimentation on quantum computers.

Over the past year, Rigetti has built 8-qubit and 19-qubit superconducting quantum processors, accessible to users over the cloud through its open source software platform, Forest. These chips have helped researchers around the globe carry out and test programs on quantum-classical hybrid computers.

However, to drive practical use of quantum computing today, Rigetti must be able to scale and improve the performance of its chips and the electronics that control them. The next phase of quantum computing will require more power at the hardware level to drive better results, and Rigetti considers itself in a unique position to solve this problem and build systems that scale.

Chad Rigetti adds, "Our 128-qubit chip is developed on a new form factor that lends itself to rapid scaling. Because our in-house design, fab, software, and applications teams work closely together, we're able to iterate and deploy new systems quickly. Our custom control electronics are designed specifically for hybrid quantum-classical computers, and we have begun integrating a 3D signaling architecture that will allow for truly scalable quantum chips. Over the next year, we'll put these pieces together to bring more power to researchers and developers."

While focused on building the 128-qubit chip, the Rigetti team is also looking at ways to enhance the application layer by pursuing quantum advantage in three areas: quantum simulation, optimization, and machine learning. The team believes quantum advantage will be achieved by creating a solution that is faster, cheaper, or of better quality than the classical alternatives. It has posed an open question as to which industry will build the first commercially useful application that adds tremendous value to researchers and businesses around the world.

Read the full coverage in the Rigetti Medium post.

Quantum Computing is poised to take a quantum leap with industries and governments on its side
Q# 101: Getting to know the basics of Microsoft's new quantum computing language
PyCon US 2018 Highlights: Quantum computing, blockchains and serverless rule!
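For a sense of what running code against Forest looks like, here is a minimal sketch using pyQuil, the Python library that fronts the platform. It assumes the local QVM and compiler servers are running, and the API shown is from a later pyQuil release, so it may differ from the version current at the time of this announcement.

    # Entangle two qubits and sample them on Rigetti's quantum virtual machine.
    # Assumes local `qvm -S` and `quilc -S` servers are running.
    from pyquil import Program, get_qc
    from pyquil.gates import H, CNOT

    program = Program(H(0), CNOT(0, 1))   # prepare a Bell pair
    qc = get_qc("2q-qvm")                 # 2-qubit quantum virtual machine
    results = qc.run_and_measure(program, trials=10)
    print(results)  # qubits 0 and 1 should always agree: both 0 or both 1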


Nvidia unveils a new Turing architecture: “The world’s first ray tracing GPU”

Fatema Patrawala
14 Aug 2018
4 min read
The SIGGRAPH 2018 conference brought one of the biggest announcements from Nvidia: the new Turing architecture, along with three new pro-oriented workstation graphics cards in its Quadro family. Nvidia calls this its greatest leap since the introduction of the CUDA GPU in 2006.

The Turing architecture features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing, which together enable real-time ray tracing. The two engines, along with more powerful compute for simulation and enhanced rasterization, will usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, new effects powered by neural networks, and fluid interactivity on highly complex models.

The company also unveiled its initial Turing-based products: the NVIDIA Quadro RTX 8000, Quadro RTX 6000, and Quadro RTX 5000 GPUs, which it expects to transform the work of approximately 50 million designers and artists across multiple industries.

At the conference, Jensen Huang, founder and CEO of Nvidia, said: "Turing is NVIDIA's most important innovation in computer graphics in more than a decade. Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry."

Here is the list of Turing architecture features in detail.

Real-Time Ray Tracing Accelerated by RT Cores

The Turing architecture is armed with dedicated ray-tracing processors called RT Cores, which accelerate the computation of how light and sound travel in 3D environments at up to 10 GigaRays per second. Turing accelerates real-time ray tracing operations by up to 25x over the previous Pascal generation, and GPU nodes can be used for final-frame rendering of film effects at more than 30x the speed of CPU nodes.

AI Accelerated by Powerful Tensor Cores

The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations per second. They power AI-enhanced features for creating applications with new capabilities, including DLAA (deep learning anti-aliasing), a breakthrough in high-quality motion image generation for denoising, resolution scaling, and video re-timing. These features are part of the NVIDIA NGX software development kit, a new deep learning-powered technology stack that enables developers to easily integrate accelerated, enhanced graphics, photo imaging, and video processing into applications using pre-trained networks.

Faster Simulation and Rasterization with the New Turing Streaming Multiprocessor

The new Turing-based GPUs feature a new streaming multiprocessor (SM) architecture that adds an integer execution unit executing in parallel with the floating-point datapath, plus a new unified cache architecture with double the bandwidth of the previous generation. Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating-point operations in parallel with 16 trillion integer operations per second.

Developers will be able to take advantage of NVIDIA's CUDA 10, FleX, and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments, and special effects. The new Turing architecture has already received support from companies like Adobe, Pixar, Siemens, Black Magic, Weta Digital, Epic Games, and Autodesk.

The new Quadro RTX is priced at $2,300 for the 16GB version and $6,300 for the 24GB version. Double the memory to 48GB and Nvidia expects you to pay about $10,000 for the high-end card. For more information, visit the official Nvidia blog.

IoT project: Design a Multi-Robot Cooperation model with Swarm Intelligence [Tutorial]
Amazon Echo vs Google Home: Next-gen IoT war
5 DIY IoT projects you can build under $50
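As a rough sanity check on that last throughput figure, here is a quick back-of-the-envelope sketch; the ~1.74 GHz boost clock is an assumption chosen for illustration, not a number from the announcement.

    # Rough sanity check of the "16 trillion floating point operations" figure.
    cuda_cores = 4608
    flops_per_core_per_cycle = 2   # one fused multiply-add counts as two FLOPs
    boost_clock_hz = 1.74e9        # assumed boost clock, not from the article

    peak_tflops = cuda_cores * flops_per_core_per_cycle * boost_clock_hz / 1e12
    print(f"~{peak_tflops:.1f} TFLOPS")  # ~16.0, in line with the quoted figure
    # Turing's separate integer pipeline can issue INT32 ops concurrently,
    # which is where the matching "16 trillion integer operations" comes from.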


Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts

Savia Lobo
06 Aug 2018
2 min read
Raspberry Pi developers can now make use of the latest TensorFlow 1.9 features in their board projects. Many developers use the Raspberry Pi to shape their innovative DIY projects, and the Pi also acts as a pathway for introducing people to programming, with the added benefit of coding in Python. The main objective of blending TensorFlow with the Raspberry Pi is to let people explore the capabilities of machine learning on cost-effective and flexible devices.

Eben Upton, the founder of the Raspberry Pi project, says, "It is vital that a modern computing education covers both fundamentals and forward-looking topics. With this in mind, we're very excited to be working with Google to bring TensorFlow machine learning to the Raspberry Pi platform. We're looking forward to seeing what fun applications kids (of all ages) create with it."

With TensorFlow's features available, existing and new users alike can try their hand at live machine learning projects. Here are a few real-life examples of TensorFlow on the Raspberry Pi:

DonkeyCar platform: DonkeyCar, a platform for building DIY Robocars, uses TensorFlow and the Raspberry Pi to create self-driving toy cars.

Object recognition robot: The TensorFlow framework is useful for recognizing objects. Using a library, a camera, and a Raspberry Pi, this robot can detect up to 20,000 different objects.

Waste sorting robot: This robot sorts garbage with a precision approaching that of a human and can recognize at least four types of waste. To identify the category a piece belongs to, the system uses TensorFlow and OpenCV.

TensorFlow installs easily from the pre-built binaries using the Python pip package system; simply run these commands in a Raspbian 9 (Stretch) terminal:

    sudo apt install libatlas-base-dev
    pip3 install tensorflow

Read more about this project on the GitHub page.

5 DIY IoT projects you can build under $50
Build your first Raspberry Pi project
How to mine bitcoin with your Raspberry Pi
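Once installed, a short session is enough to confirm TensorFlow works on the Pi. A minimal sketch using the TensorFlow 1.x API:

    import tensorflow as tf  # TensorFlow 1.9, installed via pip3 as shown above

    print(tf.__version__)    # expect 1.9.0 (or later)

    # Classic TF 1.x "hello world": build a graph, then run it in a session.
    hello = tf.constant("Hello from TensorFlow on the Raspberry Pi")
    with tf.Session() as sess:
        print(sess.run(hello))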

Nvidia Tesla V100 GPUs publicly available in beta on Google Compute Engine and Kubernetes Engine

Savia Lobo
02 May 2018
3 min read
Nvidia Tesla V100 GPUs are now publicly available in beta on Google Compute Engine and Kubernetes Engine, and Nvidia Tesla P100 GPUs are now generally available.

A single Nvidia Tesla V100 GPU offers performance roughly equivalent to that of 100 CPUs, giving customers more power to handle computationally demanding applications like machine learning, analytics, and video processing. One can select as many as eight NVIDIA Tesla V100 GPUs, 96 vCPUs, and 624GB of system memory in a single VM, receiving up to 1 petaflop of mixed-precision hardware acceleration performance.

NVIDIA V100s are available immediately in the following regions: us-west1, us-central1, and europe-west4. Each V100 GPU is priced as low as $2.48 per hour for on-demand VMs and $1.24 per hour for Preemptible VMs. Making the Tesla V100 available on Compute Engine is part of Google's GPU expansion strategy. Like Google's other GPUs, the V100 is billed by the second, and Sustained Use Discounts apply.

The NVIDIA Tesla P100 GPU, on the other hand, is a good fit if one wants a balance between price and performance. One can select up to four P100 GPUs, 96 vCPUs, and 624GB of memory per virtual machine. The P100 is also now available in europe-west4 (Netherlands) in addition to us-west1, us-central1, us-east1, europe-west1, and asia-east1.

* Maximum vCPU count and system memory limit on the instance might be smaller depending on the zone or the number of GPUs selected.
** GPU prices are listed as an hourly rate per GPU attached to a VM, billed by the second. Pricing for attaching GPUs to preemptible VMs differs from pricing for non-preemptible VMs. Prices listed are for US regions; prices for other regions may differ. Additional Sustained Use Discounts of up to 30% apply to on-demand GPU usage only.

Google Cloud makes managing GPU workloads easy for both VMs and containers by providing:
- Google Compute Engine, where customers can use instance templates and managed instance groups to easily create and scale GPU infrastructure.
- NVIDIA V100s and other GPU offerings in Kubernetes Engine, where the Cluster Autoscaler provides flexibility by automatically creating nodes with GPUs and scaling them down to zero when they are no longer in use.
- Preemptible GPUs for both Compute Engine managed instance groups and Kubernetes Engine's Autoscaler, which optimize costs while simplifying infrastructure operations.

Read more about both GPUs in detail on the Google Research Blog, and about the benefits of each in the Nvidia V100 and Nvidia P100 blog posts.

Google announce the largest overhaul of their Cloud Speech-to-Text
Google's kaniko – An open-source build tool for Docker Images in Kubernetes, without a root access
How machine learning as a service is transforming cloud
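To illustrate what the per-GPU pricing quoted above means in practice, here is a small back-of-the-envelope sketch; the 8-GPU, 24-hour training job is a hypothetical workload chosen purely for illustration.

    # Back-of-the-envelope cost for the V100 pricing quoted above.
    # The 8-GPU, 24-hour job is a hypothetical workload for illustration.
    ON_DEMAND_PER_GPU_HOUR = 2.48
    PREEMPTIBLE_PER_GPU_HOUR = 1.24
    gpus, hours = 8, 24

    print(f"on-demand:   ${ON_DEMAND_PER_GPU_HOUR * gpus * hours:,.2f}")    # $476.16
    print(f"preemptible: ${PREEMPTIBLE_PER_GPU_HOUR * gpus * hours:,.2f}")  # $238.08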