Tech News - IoT and Hardware

119 Articles

Google releases two new hardware products, Coral dev board and a USB accelerator built around its Edge TPU chip

Sugandha Lahoti
06 Mar 2019
2 min read
Google teased new hardware products built around its Edge TPU at the Google Next conference last summer. Yesterday, it officially launched two of them: the Coral Dev Board, a Raspberry Pi look-alike designed to run machine learning algorithms 'at the edge', and a USB accelerator.

Coral Dev Board

The Coral Dev Board runs Linux on an NXP i.MX 8M paired with an Edge TPU chip for accelerating TensorFlow Lite, and exposes a 40-pin header. The board also features 8GB eMMC storage, 1GB LPDDR4 RAM, Wi-Fi, and Bluetooth 4.1. It has USB 2.0/3.0 ports, a 3.5mm audio jack, a DSI display interface, a MIPI-CSI camera interface, an HDMI 2.0a connector, and two digital PDM microphones.

Source: Google

The Coral Dev Board can be used as a single-board computer when you need accelerated ML processing in a small form factor. It can also serve as an evaluation kit for the SOM and for prototyping IoT devices and other embedded systems. The board is available for $149.00, and Google has also announced a $25 MIPI-CSI 5-megapixel camera for it.

USB Accelerator

The USB Accelerator is a plug-in USB 3.0 stick that adds machine learning capabilities to existing Linux machines. The 65 x 30 mm accelerator connects to Linux-based systems via a USB Type-C port, and it also works with a Raspberry Pi board at USB 2.0 speeds. The accelerator is built around a 32-bit, 32MHz Cortex-M0+ chip with 16KB of flash and 2KB of RAM.

Source: Google

The USB Accelerator is available for $75. Developers can build machine learning models for both devices in TensorFlow Lite. More information is available on Google's Coral Beta website.

Coming soon is a PCI-E Accelerator for integrating the Edge TPU into legacy systems over a PCI-E interface, as well as a fully integrated System-on-Module with CPU, GPU, Edge TPU, Wi-Fi, Bluetooth, and a Secure Element in a 40mm x 40mm pluggable module.

Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha)
Intel acquires eASIC, a custom chip (FPGA) maker for IoT, cloud and 5G environments
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25
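To give a sense of how developers target the Edge TPU from TensorFlow Lite, here is a minimal Python sketch. It assumes the Edge TPU runtime (libedgetpu) and the tflite_runtime package are installed; the model file name is a placeholder for an Edge TPU-compiled .tflite model of your own.

```python
# Minimal sketch: run inference on an Edge TPU-compiled TensorFlow Lite model.
# Assumes the Edge TPU runtime (libedgetpu.so.1) and tflite_runtime are installed;
# "model_edgetpu.tflite" is a placeholder for a model compiled for the Edge TPU.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input of the shape and dtype the model expects.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]))
```

Without the delegate line, the same script runs the model on the CPU, which is a handy way to compare the speedup the Edge TPU provides.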

Home Assistant: an open source Python home automation hub to rule all things smart

Prasad Ramesh
25 Aug 2018
2 min read
We have Amazon Alexa, Google Home, and Philips Hue for smart actions in your home, but they are individual products with different controls. What if all of your smart devices could work together under a master hub? That is Home Assistant.

Home Assistant is an automation platform that can run on a Raspberry Pi. It acts as a central hub for connecting and automating all your smart devices, and supports services like IFTTT, Pushbullet, Google Cast, and many others. Currently over a thousand components are supported. It tracks the state of all the installed smart devices in your home, and all of them can be controlled from a single, mobile-friendly interface. For security and privacy, all operations via Home Assistant are performed locally, meaning no data is stored on the cloud.

The Home Assistant website advertises functions like turning the lights on at sunset or dimming the lights when you watch a movie on Chromecast. There is a virtual image called Hass.io, an all-in-one solution for getting started with Home Assistant, and a guide to installing Hass.io on a Raspberry Pi. The requirements for running Home Assistant are:

- Raspberry Pi 3 Model B+ and a power supply (at least 2.5A)
- A Class 10 or higher microSD card, 32 GB or bigger
- An SD card reader
- An Ethernet cable (optional, Hass.io can work over WiFi)
- Optionally, a USB stick for unattended configuration

Home Assistant is a hub; it cannot control anything on its own. Think of it as a master device that passes instructions and communicates with other devices for home automation. Home Assistant can't do anything if there are no smart devices to work with. Since it is open source, there are dozens of contributions from tinkerers and DIY enthusiasts worldwide. You can check out the automation examples to learn more and use them.

The installation is very simple and there is a friendly UI to control your automation tasks. There is plenty of information at the Home Assistant website to get you started. They also have a GitHub repository.

Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration
Apple joins the Thread Group, signalling its Smart Home ambitions with HomeKit, Siri and other IoT products
Amazon Echo vs Google Home: Next-gen IoT war
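Besides the built-in UI, Home Assistant also exposes a REST API, which makes it easy to script devices from outside the hub. Below is a minimal Python sketch; the host name, the long-lived access token, and the light.living_room entity id are placeholders for your own setup, not something Home Assistant ships with.

```python
# Minimal sketch: read a device's state and call a service via Home Assistant's REST API.
# The base URL, token and entity id below are placeholders for your own installation.
import requests

BASE = "http://homeassistant.local:8123/api"
HEADERS = {
    "Authorization": "Bearer YOUR_LONG_LIVED_ACCESS_TOKEN",
    "Content-Type": "application/json",
}

# Read the current state of a (hypothetical) light entity.
state = requests.get(f"{BASE}/states/light.living_room", headers=HEADERS)
print(state.json())

# Ask Home Assistant to turn that light on.
requests.post(
    f"{BASE}/services/light/turn_on",
    headers=HEADERS,
    json={"entity_id": "light.living_room"},
)
```

Automations themselves are normally defined in Home Assistant's own configuration, but the REST API is a convenient way to experiment from a laptop on the same network.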

Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25

Prasad Ramesh
16 Nov 2018
2 min read
Yesterday, Raspberry Pi launched the Raspberry Pi 3 Model A+ board, a smaller and cheaper version of the Raspberry Pi 3B+. In 2014, the first-generation Raspberry Pi 1 Model B+ was followed by a lighter Model A+ with half the RAM and fewer ports, sized to fit the Hardware Attached on Top (HAT) footprint. Until now there were no such small form factor boards for the Raspberry Pi 2 and 3.

Size is cut down, but (most of) the features are not

The Raspberry Pi 3 Model A+ retains most of the features and enhancements of the bigger board in this series: a 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU, 512MB LPDDR2 SDRAM, dual-band 802.11ac wireless LAN, and Bluetooth 4.2/BLE. It also keeps the improved USB mass-storage booting and improved thermal management. The entire Raspberry Pi 3 Model A+ board is an FCC-certified radio module, which will significantly reduce the cost of conformance testing for Raspberry Pi–based products. What has shrunk is the price, now down to $25, and the board size, 65x56mm, the size of a HAT.

Source: Raspberry Pi website

The Raspberry Pi 3 Model A+ will likely be the last product for now

In March this year, Raspberry Pi said that the 3+ platform is the final iteration of the "classic" Raspberry Pi boards. The next products will come out of necessity rather than evolution, because a real evolution would require new core silicon, on a new process node, with new memory technology. So this new board, the 3A+, is about closing things out; we won't see any more products in this line for the foreseeable future. The board answers one of their most frequent customer requests for 'missing products' and clears the pipeline to focus on building the next generation of Raspberry Pi boards.

For more details visit the Raspberry Pi website.

Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV
Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

You can now install Windows 10 on a Raspberry Pi 3

Prasad Ramesh
14 Feb 2019
2 min read
The WoA Installer for Raspberry Pi 3 enables installing Windows 10 on the credit-card-sized computer. It is made by the same members who brought Windows 10 ARM to the Lumia 950 and 950 XL.

Where to start?

To get started, you need a Raspberry Pi 3 Model B or B+, a microSD card of at least class 1, and a Windows 10 ARM64 image, which you can get from GitHub. You also need a recent version of Windows 10 and .NET Framework 4.6.1. The WoA Installer is just a tool that helps you deploy Windows 10 on the Raspberry Pi 3; it needs the Core Package in order to run. You can find both listed on the GitHub page.

Specification comparison

The minimum requirements for Windows 10 are:

- Processor: 1 gigahertz (GHz) or faster processor or SoC
- RAM: 1 gigabyte (GB) for 32-bit or 2 GB for 64-bit
- Hard disk space: 16 GB for the 32-bit OS, 20 GB for the 64-bit OS

The Raspberry Pi 3B+ has specifications just good enough to run Windows 10:

- SoC: Broadcom BCM2837B0 quad-core A53 (ARMv8) 64-bit @ 1.4GHz
- RAM: 1GB LPDDR2 SDRAM

While this sounds good, a Hacker News user points out: “Caution: To do this you need to run a rat's nest of a batch file that runs a bunch of different code obtained from the web. If you're going to try this, try on devices you don't care about. Or spend innumerable hours auditing code. Pass -- for now.”

You can check out the GitHub page for more instructions.

Raspberry Pi opens its first offline store in England
Introducing Strato Pi: An industrial Raspberry Pi
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25

6 powerful microbots developed by researchers around the world

Prasad Ramesh
01 Sep 2018
4 min read
When we hear the word robot, we may think of large industrial robots assembling cars, or of humanoid ones. However, there are robots so tiny you may not even be able to see them with the naked eye. This article covers six such microbots, all in early development stages.

Harvard's Ambulatory Microrobot (HAMR): A robotic cockroach

Source: Harvard

HAMR is a versatile, 1.8-inch-long robotic platform that resembles a cockroach. It weighs in under an ounce and can run, jump, and carry small items about twice its own weight. It is fast, moving at almost 19 inches per second. HAMR has given the researchers a useful base from which to build other ideas. For example, the HAMR-F, an enhanced version of HAMR, doesn't have any restraining wires; it can move around independently and is only slightly heavier (2.8g) and slower than the HAMR. It is powered by a micro 8mA lithium polymer battery. Scientists at Harvard's School of Engineering and Applied Sciences also recently added footpads that allow the microbot to swim on the water's surface, sink, and walk under water.

Robotic bees: RoboBees

Source: Harvard

Like the HAMR, Harvard's RoboBee has improved over time; it can both fly and swim. Its first successful flight was in 2013, and in 2015 it was able to swim. More recently, in 2016, it gained the ability to "perch" on surfaces using static electricity, which allows the RoboBee to save power for longer flights. The 80-milligram robot can take a swim, leap up from the water, and then land. The RoboBee can flap its wings at 220 to 300 hertz in air and 9 to 13 hertz in water.

μRobotex: microbots from France

Source: ScienceAlert

Scientists from the FEMTO-ST Institute in France have built the μRobotex platform, a new, extremely small microrobot system. It has been used to build the smallest house in the world inside a vacuum chamber; the robot used an ion beam to cut a silica membrane into tiny pieces for assembly. The micro house is 0.015 mm high and 0.020 mm broad. For comparison, a grain of sand is anywhere from 0.05 mm to 2 mm in diameter. The completed house was placed on the tip of an optical fiber, as shown in the original article's image.

Salto: a one-legged jumper

Source: Wired

Saltatorial Locomotion on Terrain Obstacles (Salto), developed at the University of California, is a one-legged jumping robot that is 10.2 inches tall when fully extended. It weighs about 100 grams and can jump up to 1 meter into the air. Salto's skills go beyond a single jump: it can bounce off walls and perform several jumps in a row while avoiding obstacles. Salto was inspired by the galago, a small mammal that is an expert jumper. The idea behind Salto is robots that can leap over rubble to provide emergency services. The newer model is the Salto-1P.

Rolls-Royce's SWARM robots

Source: Rolls-Royce

Rolls-Royce teamed up with researchers from the University of Nottingham and Harvard University to develop tiny, independent mobile robots called SWARM. They are about 0.4 inches in diameter and are part of Rolls-Royce's IntelligentEngine program. The SWARM robots are put into position by a robotic snake and use tiny cameras to capture parts of an engine that are hard to access otherwise, helping mechanics figure out what is wrong with an engine. The future plan for SWARM is to perform inspections of aircraft engines without removing them from the airplanes.

Short-Range Independent Microrobotic Platforms (SHRIMP)

Source: DARPA

The Defense Advanced Research Projects Agency (DARPA) wants to develop insect-scale robots with "untethered mobility, maneuverability, and dexterity." In other words, it wants microbots that can move around independently. DARPA is planning to sponsor these robots as part of the SHRIMP program for search and rescue, disaster relief, and hazardous environment inspection. It is also looking for robots that might work as prosthetics or as eyes to see in places that are hard to reach.

These microbots are in early development stages, but once they enter production they will prove useful in a variety of areas, from medical assistance to guided inspection in small spaces.

Intelligent mobile projects with TensorFlow: Build a basic Raspberry Pi robot that listens, moves, sees, and speaks [Tutorial]
15 million jobs in Britain at stake with AI robots set to replace humans at workforce
What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]

Google to kill another product, the 'Works with Nest' API in the wake of bringing all smart home products under "Google Nest"

Bhagyashree R
09 May 2019
5 min read
Update: Included Google's recent plan of action after facing backlash from Nest users.

At this year's Google I/O developer conference, Google announced that it is bringing all the Nest and Google Home products under one brand, "Google Nest". As part of this effort, Nest announced on Tuesday that it will be discontinuing the Works with Nest API by August 31, 2019, in favor of Works with Google Assistant. “We want to unify our efforts around third-party connected home devices under a single developer platform – a one-stop shop for both our developers and our customers to build a more helpful home. To accomplish this, we’ll be winding down Works with Nest on August 31, 2019, and delivering a single unified experience through the Works with Google Assistant program,” wrote Nest in a post.

With this change, Google aims to make the whole smart home experience more secure and unified for users. Over the next few months, users with Nest accounts will need to migrate to Google Accounts, which will serve as a single front-end for using products across Nest and Google. Along with providing a unified experience, Google also promises to be transparent about the data it collects, as laid out in an extensive document published on Tuesday. The document, titled “Google Nest commitment to privacy in the home”, describes how its connected smart home devices work and lays out Google's approach to managing user data.

Though Google is promising improved security and privacy with this change, it will also break some existing third-party integrations. One of them is IFTTT (If This, Then That), a software platform with which you can write “applets” that allow devices from different manufacturers to talk to each other. IFTTT can be used for things like automatically adjusting the thermostat when the user gets close to their house based on their phone location, or turning Philips Hue smart lights on when a Nest Cam security camera detects motion. Developers who work with the Works with Nest API are recommended to visit the Actions on Google Smart Home developer site to learn how to integrate smart home devices or services with the Google Assistant.

What do Nest users think about this decision?

Though Google is known for its search engine and other online services, it is also known for abandoning and killing its products in a trice. The decision to phase out Works with Nest has infuriated many users who have bought Nest products.

https://twitter.com/IFTTT/status/1125930219305615360

“The big problem here is that there are a lot of people that have spent a lot of money on buying quality hardware that isn't just for leisure, it's for protection. I'll cite my 4 Nest Protects and an outdoor camera as an example. If somehow they get "sunsetted" due to some Google whim, fad or Because They Can, then I'm going to be pretty p*ssed, to say the least. Based on past experience I don't trust Google to act in the users' interest,” said one Hacker News user.

Some other users think this change could be for the better, but that the timeline Google has set is pretty stringent. A Hacker News user commented on a discussion triggered by this news, “Reading thru it, it is not as brutal as it sounds, more than they merged it into the Google Assistant API, removing direct access permission to the NEST device (remember microphone-gate with NEST) and consolidating those permissions into Assistant. Whilst they are killing it off, they have a transition. However, as far as timelines go - August 2019 kill off date for the NEST API is brutal and not exactly the grace period users of connected devices/software will appreciate or in many cases with tech designed for non-technical people - know nothing until suddenly in August find what was working yesterday is now not working.”

Google's reaction to the feedback from Nest users

As a response to the backlash from Nest users, Google published a blog post last week sharing its plan of action. According to this plan, users' existing devices and integrations will continue to work with their Nest accounts; however, they will not have access to new features that will be available through a Google account. Google further clarified that it will stop taking new Works with Nest connection requests from August 31, 2019. “Once your WWN functionality is available on the WWGA platform you can migrate with minimal disruption from a Nest Account to a Google Account,” the blog post reads.

Though Google did share its plans regarding third-party integrations, it was pretty vague about the timelines. It wrote, “One of the most popular WWN features is to automatically trigger routines based on Home/Away status. Later this year, we'll bring that same functionality to the Google Assistant and provide more device options for you to choose from. For example, you’ll be able to have your smart light bulbs automatically turn off when you leave your home.” It further shared that it has teamed up with Amazon and other partners to bring custom integrations to Google Nest.

Read the official announcement on Nest's website.

Google employees join hands with Amnesty International urging Google to drop Project Dragonfly
What if buildings of the future could compute? European researchers make a proposal.
Google to allegedly launch a new Smart home device

A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer

Prasad Ramesh
02 Oct 2018
2 min read
An open-source libre GPU project is in the works by Luke Kenneth Casson Leighton, the hardware engineer who developed the EOMA68, an earth-friendly computer. The project already has access to $250k USD in funding.

The basic idea for this "libre GPU" is to use a RISC-V processor and keep the GPU mostly software-based. It will leverage the LLVM compiler infrastructure and use a software-based Vulkan renderer to emit code and run on the RISC-V processor. The Vulkan implementation will be written in the Rust programming language.

The project's current road-map only covers the software side: figuring out the state of the RISC-V LLVM back-end, writing a user-space graphics driver, and implementing the necessary bits for proposed RISC-V extensions like "Simple-V". While doing this, they will start figuring out the hardware design and the rest of the project. The road-map is quite simplified for the arduous task at hand. The website notes: “Once you've been through the "Extension Proposal Process" with Simple-V, it need never be done again, not for one single parallel / vector / SIMD instruction, ever again.”

This process will include creating a fixed-function 3D "FP to ARGB" custom instruction and a custom extension with special 3D pipelines. With Simple-V, there is no need to worry about how those operations would be parallelised. This is not a new concept; it's borrowed directly from videocore-iv, which calls it "virtual parallelism".

It's an enormous effort on both the software and hardware ends to come up with a combined RISC-V, Rust, LLVM, and Vulkan open-source project, and it is difficult even with the funding, considering it is a software-based GPU. It is worth noting that the EOMA68 project, started by Luke in 2016, raised over $227k USD from crowdfunding participants and hasn't shipped yet.

To know more about this project, visit the libre risc-v website.

NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD's deep learning plans
PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn

Alibaba’s chipmaker launches open source RISC-V based ‘XuanTie 910 processor’ for 5G, AI, IoT and self-driving applications

Vincy Davis
26 Jul 2019
4 min read
Alibaba's chip subsidiary Pingtouge, launched in 2018, made a major announcement yesterday: it is launching its first product, the XuanTie 910 processor, built on the open-source RISC-V instruction set architecture. The XuanTie 910 is expected to reduce the costs of related chip production by more than 50%, reports Caixin Global.

XuanTie 910, also known as T-Head, will soon be available in the market for commercial use. Pingtouge will also be releasing some of XuanTie 910's code on GitHub for free to help the global developer community create innovative applications. No release dates have been revealed yet.

What are the properties of the XuanTie 910 processor?

The XuanTie 910 16-core processor achieves 7.1 CoreMark/MHz and its main frequency can reach 2.5GHz. The processor can be used to manufacture high-end edge-based microcontrollers (MCUs), CPUs, and systems-on-chip (SoC), for applications like 5G telecommunication, artificial intelligence (AI), and autonomous driving. The XuanTie 910 gives a 40% performance increase over mainstream RISC-V instructions, along with a 20% increase in terms of instructions. According to Synced, XuanTie 910 has two unconventional properties:

- It has a 2-stage pipelined out-of-order triple-issue processor with two memory accesses per cycle.
- The processor's computing, storage, and multi-core capabilities are superior thanks to an extended instruction set: XuanTie 910 adds more than 50 instructions to RISC-V.

Last month, The Verge reported that an internal ARM memo had instructed its staff to stop working with Huawei. With the US blacklisting China's telecom giant Huawei and banning any American company from doing business with it, it seems that ARM is also following the American strategy. Although ARM is based in the U.K. and owned by the Japanese SoftBank group, it does have "US origin technology", as claimed in the internal memo.

This may be one of the reasons why Alibaba is increasing its efforts in developing RISC-V, so that Chinese tech companies can become independent from Western technologies. A XuanTie 910 processor can assure Chinese companies of a stable future, with no fear of being banned by Western governments. Besides being cost-effective, RISC-V has other advantages over ARM, such as greater flexibility. With complex licence policies and power concerns, it is going to be a challenge for ARM to compete against RISC-V and MIPS (Microprocessor without Interlocked Pipeline Stages) processors.

A Hacker News user comments, “I feel like we (USA) are forcing China on a path that will make them more competitive long term.” Another user says, “China is going to be key here. It's not just a normal market - China may see this as essential to its ability to develop its technology. It's Made in China 2025 policy. That's taken on new urgency as the west has started cutting China off from western tech - so it may be normal companies wanting some insurance in case intel / arm cut them off (trade disputes etc) AND the govt itself wanting to product its industrial base from cutoff during trade disputes”

Some users also feel that technology wins when two big economies keep bringing out innovations. A comment on Hacker News reads, “Good to see development from any country. Obviously they have enough reason to do it. Just consider sanctions. They also have to protect their own market. Anyone that can afford it, should do it. Ultimately it is a good thing from technology perspective.”

Not all US tech companies are wary of partnering with Chinese counterparts. Two days ago, Salesforce, an American cloud-based software company, announced a strategic partnership with Alibaba. This aims to help Salesforce localize its products in mainland China, Hong Kong, Macau, and Taiwan, and will enable Salesforce customers to market, sell, and operate through services like Alibaba Cloud and Tmall.

Winnti Malware: Chinese hacker group attacks major German corporations for years, German public media investigation reveals
The US Justice Department opens a broad antitrust review case against tech giants
Salesforce is buying Tableau in a $15.7 billion all-stock deal

Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop

Sugandha Lahoti
08 May 2019
4 min read
At the ongoing 2019 Google I/O, Google made a major overhaul to its Flutter UI framework: Flutter has now expanded from mobile to multi-platform. The company released the first technical preview of Flutter for web, and the core framework for mobile devices was upgraded to Flutter 1.5. On desktop, Flutter remains an experimental project. It is not production-ready, but the team has published early instructions for developing apps to run on Mac, Windows, and Linux. An embedding API for Flutter is also available that allows it to be used in scenarios such as home and automotive devices.

Google notes, “The core Flutter project has been making progress to enable desktop-class apps, with input paradigms such as keyboard and mouse, window resizing, and tooling for Chrome OS app development. The exploratory work that we did for embedding Flutter into desktop-class apps running on Windows, Mac and Linux has also graduated into the core Flutter engine.”

Flutter for Web

Flutter for web allows web-based applications to be built using the Flutter framework. Per Google, with Flutter for web you can create “highly interactive, graphically rich content,” though it plans to continue evolving this version with a “focus on performance and harmonizing the codebase.” It allows developers to compile existing Flutter code written in Dart into a client experience that can be embedded in the browser and deployed to any web server. Google teamed up with the New York Times to build a small puzzle game called KenKen as an early example of what can be built using Flutter for web. The game uses the same code across Android, iOS, the web, and Chrome OS.

Source: Google Blog

Flutter 1.5

Flutter 1.5 hosts a variety of new features, including updates to its iOS and Material widgets and engine support for new mobile device types. The latest release also brings support for Dart 2.3 with extensive UI-as-code functionality, as well as an in-app payment library which will make monetizing Flutter-based apps easier. Google also showcased an ML Kit Custom Image Classifier at Google I/O 2019, built using Flutter and Firebase. The kit offers an easy-to-use, app-based workflow for creating custom image classification models: you can collect training data using the phone's camera, invite others to contribute to your datasets, trigger model training, and use trained models, all from the same app.

Google has also released a comprehensive new training course for Flutter, built by The App Brewery. Their new course is available at a time-limited discount, down from $199 to just $10.

Netizens had trouble making sense of Google's move and were left wondering whether Google wants people to invest in learning Dart or Kotlin. For reference, Flutter is entirely built in Dart, and Google made two major announcements for Kotlin at Google I/O: Android development will become increasingly Kotlin-first, and Google announced the first preview of Jetpack Compose, a new open-source UI toolkit for Kotlin developers.

A comment on Hacker News reads, “This is massively confusing. Do we invest in Kotlin ...or do we invest in Dart? Where will Android be in 2 years: Dart or Kotlin?” In response, another comment reads, “I don't think anyone has a definite answer, not even Google itself. Google placed several bets on different technologies and community will ultimately decide which of them is the winning one. Personally, I think native Android (Kotlin) and iOS (Swift) development is here to stay. I have tried many cross-platform frameworks and on any non-trivial mobile app, all of them cause more problem than they solve.” Another said, “If you want to do android development, Kotlin. If you want to do multi-platform development, flutter.” “Invest in Kotlin. Kotlin is useful for Android NOW. Whenever Dart starts becoming more mainstream, you'll know and have enough time to react to it”, was another user's opinion. Read the entire conversation on Hacker News.

Google launches Flutter 1.2, its first feature update, at Mobile World Congress 2019
You can now permanently delete your location history and web and app activity data on Google
Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with a focus on AI and developer productivity

Espressif IoT devices susceptible to WiFi vulnerabilities can allow hijackers to crash devices connected to enterprise networks

Savia Lobo
05 Sep 2019
4 min read
Matheus Eduardo Garbelini, a member of the ASSET (Automated Systems SEcuriTy) Research Group at the Singapore University of Technology and Design, released a proof of concept for three WiFi vulnerabilities in the Espressif IoT devices ESP32 and ESP8266.

Three WiFi vulnerabilities in the ESP32/ESP8266 IoT devices

Zero PMK Installation (CVE-2019-12587)

This WiFi vulnerability allows hijacking ESP32 and ESP8266 clients connected to enterprise networks. It lets an attacker take control of the device's WiFi EAP session by sending an EAP-Fail message in the final step of the connection between the device and the access point. The researcher discovered that both IoT devices update their Pairwise Master Key (PMK) only when they receive an EAP-Success message. If an EAP-Fail message is received before the EAP-Success, the device skips updating the PMK received during a normal EAP exchange (EAP-PEAP, EAP-TTLS or EAP-TLS), yet still accepts the EAPoL 4-Way handshake. Each time the ESP32/ESP8266 starts, the PMK is initialized to zero, so if an EAP-Fail message is sent before the EAP-Success, the device uses a zero PMK, allowing the attacker to hijack the connection between the AP and the device.

ESP32/ESP8266 EAP client crash (CVE-2019-12586)

This WiFi vulnerability is found in the SDKs of the ESP32 and ESP8266 and allows an attacker in radio range to reliably crash any ESP32/ESP8266 connected to an enterprise network. In combination with the zero PMK installation vulnerability, it could increase the damage to any unpatched device. Espressif has fixed the problem and committed patches for the ESP32 SDK; however, the SDK and Arduino board support for the ESP8266 are still unpatched.

ESP8266 Beacon Frame Crash (CVE-2019-12588)

In this WiFi vulnerability, the client 802.11 MAC implementation in Espressif ESP8266 NONOS SDK 3.0 and earlier does not correctly validate the RSN AuthKey suite list count in beacon frames, probe responses, and association responses, which allows attackers in radio range to cause a denial of service (crash) via a crafted message. Two kinds of malformed beacon frames trigger the problem:

- When crafted 802.11 frames are sent with the Auth Key Management Suite Count (AKM) field in the RSN tag too large or incorrect, the ESP8266 in station mode crashes.
- When crafted 802.11 frames are sent with the Pairwise Cipher Suite Count field in the RSN tag too large or incorrect, the ESP8266 in station mode crashes.

“The attacker sends a malformed beacon or probe response to an ESP8266 which is already connected to an access point. However, it was found that ESP8266 can crash even when there’s no connection to an AP, that is even when ESP8266 is just scanning for the AP,” the researcher says.

A user on Hacker News writes, “Due to cheap price ($2—$5 depending on the model) and very low barrier to entry technically, these devices are both very popular as well as very widespread in those two categories. These chips are the first hits for searches such as "Arduino wifi module", "breadboard wifi", "IoT wifi module", and many, many more as they're the downright easiest way to add wifi to something that doesn't have it out of the box. I'm not sure how applicable these attack vectors are in the real world, but they affect a very large number of devices for sure.”

To know more about this news in detail, read the proof of concept on GitHub.

Other interesting news in IoT security

Cisco Talos researchers disclose eight vulnerabilities in Google's Nest Cam IQ indoor camera
Microsoft reveals Russian hackers “Fancy Bear” are the culprit for IoT network breach in the U.S.
Researchers reveal vulnerability that can bypass payment limits in contactless Visa card

Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap

Amrata Joshi
20 Jun 2019
7 min read
Last month, Manuel A. Fernandez Montecelo, a Debian contributor and developer, talked about the Debian GNU/Linux riscv64 port at the RISC-V workshop. Debian, a Unix-like operating system, consists of free software supported by the Debian community, individuals who care about free and open-source software. The goal of the Debian GNU/Linux riscv64 port project has been to have Debian ready for installation and running on systems that implement a variant of RISC-V, an open-source hardware instruction set architecture. The feedback on his presentation at the workshop was positive. Earlier this week, Manuel A. Fernandez Montecelo announced an update on the status of the Debian GNU/Linux riscv64 port. The announcement comes weeks before the release of buster, which will bring another set of changes to benefit the port.

What is RISC-V used for and why is Debian interested in building this port?

According to the Debian wiki page, “RISC-V (pronounced "risk-five") is an open source instruction set architecture (ISA) based on established reduced instruction set computing (RISC) principles. In contrast to most ISAs, RISC-V is freely available for all types of use, permitting anyone to design, manufacture and sell RISC-V chips and software. While not the first open ISA, it is significant because it is designed to be useful in modern computerized devices such as warehouse-scale cloud computers, high-end mobile phones and the smallest embedded systems. Such uses demand that the designers consider both performance and power efficiency. The instruction set also has a substantial body of supporting software, which fixes the usual weakness of new instruction sets. In this project the goal is to have Debian ready to install and run on systems implementing a variant of the RISC-V ISA: Software-wise, this port will target the Linux kernel. Hardware-wise, the port will target the 64-bit variant, little-endian. This ISA variant is the "default flavour" recommended by the designers, and the one that seems to attract more interest for planned implementations that might become available in the next few years (development boards, possible consumer hardware or servers).”

Update on the Debian GNU/Linux riscv64 port

Image source: Debian

The first graph shows that the percentage of arch-dependent packages built for riscv64 (grey line) has been at or above 80% since mid-2018. Arch-dependent packages are almost half of Debian's [main, unstable] archive; arch-independent packages can be used by all ports, provided that the software they rely on is present. The update also highlights that around 90% of packages from the whole archive have been made available for this architecture.

Image source: Debian

The second graph shows that the percentages are very stable across all architectures. Montecelo writes, “This is in part due to the freeze for buster, but it usually happens at other times as well (except in the initial bring-up or in the face of severe problems).” Even the second-class ports appear to be stable. Montecelo writes, “Together, both graphs are also testament that there are people working on ports at all times, keeping things working behind the scenes, and that's why from a high level view it seems that things just work.”

According to him, apart from the work of the porters themselves, there are people working on bootstrapping issues who make it easier than in the past to bring up ports, and who make it easier to cope when toolchain support or other port-related issues blow up. He further added, “And, of course, all other contributors of Debian help by keeping good tools and building rules that work across architectures, patching the upstream software for the needs of several architectures at the same time (endianness, width of basic types), many upstream projects are generic enough that they don't need specific porting, etc.”

Future scope and improvements yet to come

Getting Debian running on RISC-V will not be easy, for various reasons, including the limited availability of hardware able to run the Debian port and the limited options for bootloaders. According to Montecelo, this is an area for improvement. He further added, “Additionally, it would be nice to have images publicly available and ready to use, for both Qemu and hardware available like the HiFive Unleashed (or others that might show up in time), but although there's been some progress on that, it's still not ready and available for end users.”

Presently, more than 500 packages from the Rust ecosystem in the archive (about 4%) cannot be built and used until Rust gains support for the architecture. Rust requires LLVM, and there's no Rust compiler based on GCC or other toolchains. Montecelo writes, “Firefox is the main high-level package that depends on Rust, but many packages also depend on librsvg2 to render SVG images, and this library has been converted to Rust. We're still using the C version for that, but it cannot be sustained in the long term.” Apart from Rust, other packages use LLVM to some extent, but currently it is not fully working for riscv64. LLVM support for riscv64 is expected to be completed this year. Regarding other programming languages, he writes, “There are other programming language ecosystems that need attention, but they represent a really low percentage (only dozens of packages, of more than 12 thousand; and with no dependencies outside that set). And then, of course, there is a long tail of packages that cannot be built due to a missing dependency, lack of support for the architecture or random failures -- together they make a substantial number of the total, but they need to be looked at and solved almost on a case-by-case basis.”

Why are people excited about this?

Many users seem to be excited about the news, one reason being that there won't be a need to bootstrap from scratch: Rust will be able to cross-compile easily once riscv64 support lands. A user commented on Hacker News, “Debian Rust maintainer here. We don't need to bootstrap from scratch, Rust (via LLVM) can cross-compile very easily once riscv64 support is added.” This also appears to be good news for Debian, as cross-compiling has come a long way on Debian. Others are waiting for more pieces to land: “I am waiting until the Bitmanip extension lands to get excited about RISC-V: https://github.com/riscv/riscv-bitmanip”

A few others see LLVM support for riscv64 as the missing piece. One user commented, “The lack of LLVM backend surprises me. How much work is it to add a backend with 60 instructions (and few addressing modes)? It's clearly far more than I would have guessed.” Another comment reads, “Basically LLVM is now a dependency of equal importance to GCC for Debian. Hopefully this will help motivate expanding architecture-support for LLVM, and by proxy Rust.”

According to users, the port is held back on two major points: LLVM compiler support for riscv64, and the lack of a Rust implementation based on GCC. If LLVM support lands this year, developers will be able to pair a front end for any programming language with a back end for this instruction set architecture; a GCC-based Rust, in turn, would bring the many language extensions GCC provides. A user commented on Reddit, “The main blocker to finish the port is having a working Rust toolchain. This is blocked on LLVM support, which only supports RISCV32 right now, and RISCV64 LLVM support is expected to be finished during 2019.” Another comment reads, “It appears that enough people in academia are working on RISCV for LLVM to accept it as a mainstream backend, but I wish more stakeholders in LLVM would make them reconsider their policy.”

To know more about this news, check out Debian's official post.

Debian maintainer points out difficulties in Deep Learning Framework Packaging
Debian project leader elections goes without nominations. What now?
Are Debian and Docker slowly losing popularity?

Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV

Prasad Ramesh
19 Oct 2018
2 min read
Yesterday the Raspberry Pi Foundation launched a new device called the Raspberry Pi TV HAT. It is a small board that connects to a TV antenna and lets you decode and stream live TV. The TV HAT is roughly the size of a Raspberry Pi Zero board. It connects to the Raspberry Pi via a GPIO connector and has a port for a TV antenna connector. The new Raspberry Pi addon follows a new HAT (Hardware Attached on Top) form factor: the addon itself is a half-sized HAT matching the outline of Raspberry Pi Zero boards.

Source: Raspberry Pi website

TV HAT specifications and requirements

The addon has a Sony CXD2880 TV tuner. It supports TV standards like DVB-T2 (1.7MHz, 5MHz, 6MHz, 7MHz, 8MHz channel bandwidth) and DVB-T (5MHz, 6MHz, 7MHz, 8MHz channel bandwidth). The frequencies it can receive are VHF III, UHF IV, and UHF V. Raspbian Stretch (or later) is required for using the Raspberry Pi TV HAT, and TVHeadend is the recommended software to start with TV streams. There is a 'Getting Started' guide on the Raspberry Pi website.

Watch on the Raspberry Pi

With the TV HAT you can receive and view television on a Raspberry Pi board. The Pi can also be used as a server to stream television over a network to other devices. When running as a server, the TV HAT works with all 40-pin GPIO Raspberry Pi boards. Watching TV on the Pi itself needs more processing power, so a Pi 2, 3, or 3B+ is recommended.

The TV HAT connected to a Raspberry Pi board. Source: Raspberry Pi website

Streaming over a network

Connecting a TV HAT to your network allows viewing streams on any device connected to the network, including computers, smartphones, and tablets. Initially, the TV HAT will be available only in Europe. It is now on sale for $21.50; visit the Raspberry Pi website for more details.

Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
How to secure your Raspberry Pi board [Tutorial]
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

Introducing Strato Pi: An industrial Raspberry Pi

Prasad Ramesh
26 Nov 2018
4 min read
Sfera Labs, an Italian company, has designed Strato Pi, a Raspberry Pi based board intended for industrial applications where a higher level of reliability is required.

Source: Sfera Labs website

Strato Pi features

The board is roughly the same size as a regular Raspberry Pi 2/3 and is engineered to work in industrial environments that demand more rugged devices.

A power supply that can handle harsh environments

The Strato Pi accepts a wide range of supply voltages and can handle substantial amounts of ripple, noise, and voltage fluctuation. The power supply circuit is heavily protected and filtered with oversized electrolytic capacitors, diodes, inductors, and a high-efficiency voltage regulator. The power converter is based on PWM converter integrated circuits, providing up to 95% power efficiency and up to 3A continuous current output. Over-current limiting, over-voltage protection, and thermal shutdown are also built in. The board is protected against reverse polarity with resettable fuses, and surge protection up to ±500V/2ohms 1.2/50μs ensures reliability even in harsh environments.

A UPS to safeguard against power failure

In database and data collection applications, a sudden power interruption may cause data loss. To tackle this, Strato Pi has an integrated UPS that gives enough time to save data and shut down when the main power fails. The battery power stage supplies the Strato Pi circuits without interruption even when the main power supply fails, and it also charges the battery via a high-efficiency step-up converter that generates the optimal charging voltage independently of the main power supply voltage.

Built-in real time clock

The Strato Pi has a built-in battery-backed real time clock/calendar, directly connected to the Raspberry Pi via the I2C bus interface, so the correct time is available even without an internet connection. The real time clock is based on the MCP79410 general purpose Microchip RTCC chip. A replaceable CR1025 battery acts as the backup power source when main power is not available; with main power always on, the battery can last over 10 years.

Serial port

Strato Pi provides RS-232 and RS-485 serial ports whose interface circuits are insulated from the main and battery power supply voltages, which avoids failures due to ground loops. A microcontroller running a proprietary algorithm automatically manages the data direction of the RS-485 line, taking the baud rate and the number of bits into account without any special configuration, so the Raspberry Pi board can communicate through its TX/RX lines without any additional signal.

CAN bus

The Controller Area Network (CAN) bus is widely used and based on a multi-master architecture. The board implements an easy-to-use CAN bus controller supporting CAN specification version 2.0B at up to 1 Mbps. The RS-485 and CAN bus ports can be used at the same time.

A hardware watchdog

A hardware watchdog is an electronic circuit that can automatically reset the processor if the software hangs. On the Strato Pi it is implemented with the help of the on-board microcontroller and is independent of the Raspberry Pi's internal CPU watchdog circuit.

The base variant starts at roughly $88. Sfera Labs also offers a mini variant and products like a prebuilt server. For more details on Strato Pi, visit the Sfera Labs website.
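As an illustration of how the RS-485 port might be used from the Raspberry Pi side, here is a minimal Python sketch using pyserial. The device path, baud rate, and request bytes are assumptions for illustration only; check the Strato Pi documentation for the actual serial port mapping of your board.

```python
# Minimal sketch: exchange bytes with an RS-485 device through the Pi's serial port.
# /dev/ttyAMA0, the baud rate and the request payload are illustrative assumptions;
# the Strato Pi's automatic direction handling means no TX-enable signal is needed here.
import serial

with serial.Serial("/dev/ttyAMA0", baudrate=9600, timeout=1.0) as port:
    port.write(b"\x01\x03\x00\x10")  # example request bytes for a hypothetical device
    reply = port.read(64)            # read up to 64 bytes or until the timeout expires
    print(reply.hex())
```

Because the board's microcontroller handles RS-485 direction switching automatically, the script does not need to toggle any GPIO between writing the request and reading the reply.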
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25
Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV
Intelligent mobile projects with TensorFlow: Build your first Reinforcement Learning model on Raspberry Pi [Tutorial]

California passes the U.S.' first IoT security bill

Prasad Ramesh
25 Sep 2018
3 min read
California likes to lead the way when it comes to digital regulation. Just a few weeks ago it passed legislation that looks like it could restore net neutrality. Now a bill designed to tighten IoT security is with the governor, awaiting the signature that would carry it into California state law.

The bill, SB-327 Information privacy: connected devices, was initially introduced in February 2017 by Senator Jackson and was the first legislation of its kind in the US. Approved at the end of August, it will come into effect at the start of 2020 once signed by Governor Jerry Brown.

Read next: IoT Forensics: Security in an always connected world where things talk

What California's IoT bill states

The new IoT security bill covers a number of important areas. For example, manufacturers' IoT devices will need to contain certain safety and security features:

- Security should be appropriate to the nature and function of the device.
- The feature should be appropriate to the information the device may collect, contain, or transmit.
- It should be designed to protect the device and the information within it from unauthorized access, destruction, use, modification, or disclosure.

If an IoT device requires authentication over the internet, further conditions need to be met, such as:

- The preset password must be unique to each device that is manufactured.
- The device must ask the user to generate a new authentication method before it can be used for the first time.

It's worth noting that the points above are not applicable to IoT devices that are subject to security requirements under federal law. Likewise, a covered entity such as a health care provider, business associate, contractor, or employer subject to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) or the Confidentiality of Medical Information Act is exempt.

The IoT is a network of devices that connect to the internet via Wi-Fi. They are not openly visible, as most of them are used within a local network, yet they often do not have many security measures. The bill doesn't give an exact definition of a 'reasonable security feature' but provides a few guiding points in the interest of users' security. The legislation states:

“This bill, beginning on January 1, 2020, would require a manufacturer of a connected device, as those terms are defined, to equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure, as specified.”

Criticisms of the IoT bill

Some cybersecurity experts have criticised the legislation. For example, Robert Graham writes on his Errata Security blog that the bill is “based on a superficial understanding of cybersecurity/hacking that will do little improve security, while doing a lot to impose costs and harm innovation.” He explains that “the point [of good cybersecurity practice] is not to add ‘security features’ but to remove ‘insecure features’.” Graham's criticisms underline that while the legislation might be well-intentioned, whether it will be impactful remains another matter. It is, at the very least, a step in the right direction by a state keen to take digital security and freedom into its own hands.

You can read the bill at the California Legislative Information website.

How Blockchain can level up IoT Security
Defending your business from the next wave of cyberwar: IoT Threats

Rigetti plans to deploy 128 qubit chip Quantum computer

Fatema Patrawala
16 Aug 2018
3 min read
Rigetti Computing is committed to building the world's most powerful computers, and they believe the true value of quantum will be unlocked by practical applications. Rigetti CEO Chad Rigetti recently posted on Medium about the company's plan to deploy a 128-qubit quantum computing system, challenging Google, IBM, and Intel for leadership in this emerging technology. They plan to deploy the system within the next 12 months and shared their investment in resources at the application layer to encourage experimentation on quantum computers.

Over the past year, Rigetti has built 8-qubit and 19-qubit superconducting quantum processors, which are accessible to users over the cloud through their open source software platform Forest. These chips have helped researchers around the globe carry out and test programs on Rigetti's quantum-classical hybrid computers. However, to drive practical use of quantum computing today, Rigetti must be able to scale and improve the performance of the chips and connect them to the electronics on which they run. The next phase of quantum computing will require more power at the hardware level to drive better results, and Rigetti is in a unique position to solve this problem and build systems that scale.

Chad Rigetti adds, “Our 128-qubit chip is developed on a new form factor that lends itself to rapid scaling. Because our in-house design, fab, software, and applications teams work closely together, we’re able to iterate and deploy new systems quickly. Our custom control electronics are designed specifically for hybrid quantum-classical computers, and we have begun integrating a 3D signaling architecture that will allow for truly scalable quantum chips. Over the next year, we’ll put these pieces together to bring more power to researchers and developers.”

While focused on building the 128-qubit chip, the Rigetti team is also looking at ways to enhance the application layer by pursuing quantum advantage in three areas: quantum simulation, optimization, and machine learning. The team believes quantum advantage will be achieved by creating a solution that is faster, cheaper, and of better quality, and they have posed an open question as to which industry will build the first commercially useful application that adds tremendous value to researchers and businesses around the world. Read the full coverage on the Rigetti Medium post.

Quantum Computing is poised to take a quantum leap with industries and governments on its side
Q# 101: Getting to know the basics of Microsoft's new quantum computing language
PyCon US 2018 Highlights: Quantum computing, blockchains and serverless rule!
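The Forest platform mentioned above is programmed through pyQuil. As a flavor of what running a program looks like, here is a minimal sketch of a two-qubit Bell state executed on the local QVM simulator, assuming a pyQuil 2.x install with the QVM and quilc services running; it is not specific to the upcoming 128-qubit chip.

```python
# Minimal sketch: build and run a Bell-state program with pyQuil (Rigetti Forest).
# Assumes pyQuil 2.x with the local QVM and quilc services available.
from pyquil import Program, get_qc
from pyquil.gates import H, CNOT, MEASURE

program = Program()
ro = program.declare("ro", "BIT", 2)   # two classical readout bits
program += H(0)                        # put qubit 0 into superposition
program += CNOT(0, 1)                  # entangle qubit 1 with qubit 0
program += MEASURE(0, ro[0])
program += MEASURE(1, ro[1])
program.wrap_in_numshots_loop(10)      # repeat the measurement 10 times

qc = get_qc("2q-qvm")                  # a 2-qubit simulated quantum computer
results = qc.run(qc.compile(program))
print(results)                         # expect rows of [0, 0] and [1, 1]
```

Pointing get_qc at one of Rigetti's named hardware lattices instead of the QVM would run the same program on real qubits, though that requires access to their Quantum Cloud Services.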