How-To Tutorials - IoT and Hardware

152 Articles

Made by Google 2019: Google’s hardware event unveils Pixel 4 and announces the launch date of Google Stadia

Sandesh Deshpande
17 Oct 2019
5 min read
After Apple, Microsoft, and OnePlus, Google is the next tech giant to showcase its new range of products this Techtober. At its annual hardware event, 'Made by Google', in New York City, Google launched a variety of new gear. Though the focus of the event was on Google's flagship phones, the Pixel 4 and Pixel 4 XL, the tech giant also announced the launch date of its highly anticipated game-streaming service, Google Stadia.

The theme of the event was ambient computing and how it combines with artificial intelligence, wireless tech, and cloud computing, and it was evident in every product Google showcased. "Our vision for ambient computing is to create a single, consistent experience at home, at work or on the go, whenever you need it," said Rick Osterloh, Google's head of hardware. "Your devices and services work together," Osterloh said. "And it's fluid so it disappears into the background."

Let's look at the various products announced at 'Made by Google'.

Google Stadia to release on November 19

The event kicked off with the release date (November 19) of Google's game-streaming service, Stadia. With Stadia, you can play games on your desktop through Google's Chrome web browser, on smartphones, smart televisions, and tablets, or via Chromecast. Stadia is a cloud-based platform that streams games directly instead of rendering them on a console or a powerful local PC. Google also shared the inspiration behind the design of the Stadia controller.

https://youtube.com/watch?v=Pwb6d2wK3Qw

Pixel 4 and Pixel 4 XL now available

After much anticipation, Google finally launched the next phones in its flagship Pixel series in two variants: the Pixel 4 and the Pixel 4 XL. The Pixel 4 has a 5.7″ screen and a 2,800mAh battery, while the Pixel 4 XL has a 6.3″ screen and a 3,700mAh battery. Both run on the Snapdragon 855 chipset with 6GB of RAM, come with dual rear cameras, and have displays with a 90Hz refresh rate. They also feature "Project Soli radar", powering face unlock and gesture recognition that lets you switch songs, snooze alarms, or silence calls with just a wave of your hand. The Pixel 4 also comes with crash detection (US only for now), which can recognize if you've been in an automobile accident and automatically call 911 for you.

https://youtube.com/watch?v=_F7YRde8DuE

The Pixel 4 also has an advanced recorder app that can record and transcribe audio at the same time. It can identify specific words and sounds, letting you search specific parts of a recording with a built-in search function, and all of this processing happens locally on your device.

One cannot talk about Pixel phones without mentioning camera features. Both the Pixel 4 and Pixel 4 XL have three cameras: a 12.2-megapixel f/1.7 main camera and a 16-megapixel f/2.4 telephoto camera on the rear, plus an 8-megapixel selfie camera. For video, these cameras can record 4K at 30 fps and 1080p at 120 fps.

The Google Pixel 4 will be released on October 24 but is available for pre-order today. The 64GB version of the Pixel 4 will start from $799, with a 128GB option available for $899. The 64GB Pixel 4 XL will start from $899, with a 128GB option for $999.

New wireless headphones: Pixel Buds 2

Google also announced its next-generation wireless headphones, Pixel Buds 2. They give you direct access to Google Assistant with the 'Hey Google' command and support long-range Bluetooth connectivity: according to Google, Pixel Buds will stay connected to your phone across three rooms, or the length of a football field. Pixel Buds will be available in Spring 2020 for around $179.

https://youtube.com/watch?v=2MmAbDJK8YY

New Chrome OS laptop: Pixelbook Go

Google has refreshed its Chromebook series with a new product, the Pixelbook Go. "We wanted to create a thin and light laptop that was really fast, and also have it last all day. And of course, we wanted it to look and feel beautiful," said Ivy Ross, Google's VP and head of hardware design, at the event. Weighing only two pounds and measuring 13mm thin, the Pixelbook Go is portable while also offering up to 16GB of RAM and up to 256GB of storage. The company has promised around 12 hours of battery life. The base model starts at $649.

Source: Google Blog

Google Home Mini is now 'Nest Mini'

Google's Home Mini assistant smart speaker has been renamed 'Nest Mini'. It is now made of recycled plastic bottles, is wall-mountable, and includes a machine learning chip for faster response times. The smart speaker also has additional microphones for louder environments. Nest Mini will be available from October 22 for $49.

Source: Google Blog

Google also launched Nest Wifi, a combined router and smart speaker that is faster and offers 25% better coverage than its predecessor, Google Wifi. The routers come in a 2-pack for $269 or a 3-pack for $349 and go on sale on November 4.

You can watch the event on YouTube.

Bazel 1.0, Google's polyglot build system switches to semantic versioning for better stability

Google Project Zero discloses a zero-day Android exploit in Pixel, Huawei, Xiaomi and Samsung devices

Google Chrome Keystone update can render your Mac system unbootable


What’s new in USB4? Transfer speeds of up to 40Gbps with Thunderbolt 3 and more

Sugandha Lahoti
04 Sep 2019
3 min read
USB4 technical specifications were published yesterday. Along with removing the space in the stylization (USB4 instead of USB 4), the new version offers double the speed of its previous versions. The USB4 architecture is based on Intel's Thunderbolt; Intel provided Thunderbolt 3 to the USB Promoter Group royalty-free earlier this year.

Features of USB4

USB4 runs blazingly fast by incorporating Intel's Thunderbolt technology. It allows transfers at a rate of 40 gigabits per second, twice the speed of the latest version of USB 3 (USB 3.2 Gen 2x2, at 20Gbps) and eight times the speed of the original 5Gbps USB 3 standard. 40Gbps speeds, for example, allow users to connect two 4K monitors at once, or run high-end external GPUs with ease. (For a rough sense of what those numbers mean in practice, see the short comparison at the end of this piece.)

Key characteristics specified in the USB4 specification include:

- Two-lane operation using existing USB Type-C cables, and up to 40Gbps operation over 40Gbps-certified cables
- Multiple data and display protocols that efficiently share the total available bandwidth over the bus
- Backward compatibility with USB 3.2, USB 2.0 and Thunderbolt 3

More good news: USB4 will use the same USB-C connector design as USB 3, which means manufacturers will not need to introduce new ports into their devices.

Why USB4 omits a space

The change in stylization was made to simplify things. In an interview with Tom's Hardware, USB Promoter Group CEO Brad Saunders said this is to prevent a profusion of products sporting version-number badges that could confuse consumers. "We don't plan to get into a 4.0, 4.1, 4.2 kind of iterative path," he explained. "We want to keep it as simple as possible. When and if it goes faster, we'll simply have the faster version of the certification and the brand."

Is Thunderbolt 3 compatibility optional?

The specification makes Thunderbolt 3 support optional: it is up to USB4 device makers to implement it. This was a major topic of discussion on social media.

https://twitter.com/KevinLozandier/status/1169106844289077248

The USB Implementers Forum released a detailed statement clarifying the issue. "Regarding USB4 specification's optional support for Thunderbolt 3, USB-IF anticipates PC vendors to broadly support Thunderbolt 3 compatibility in their USB4 solutions given Thunderbolt 3 compatibility is now included in the USB4 specification and therefore royalty free for formal adopters," the USB-IF said in a statement. "That said, Intel still maintains the Thunderbolt 3 branding/certification so consumers can look for the appropriate Thunderbolt 3 logo and brand name to ensure the USB4 product in question has the expected Thunderbolt 3 compatibility. Furthermore, the decision was made not to make Thunderbolt 3 compatibility a USB4 specification requirement as certain manufacturers (e.g. smartphone makers) likely won't need to add the extra capabilities that come with Thunderbolt 3 compatibility when designing their USB4 products," the statement added.

Though the specification has been released, it will be some time before USB4-compatible devices hit the market. We can expect to see devices that take advantage of the new version in late 2020 or beyond.

Read more in Hardware

USB-IF launches 'Type-C Authentication Program' for better security

Apple USB Restricted Mode: Here's Everything You Need to Know

USB 4 will integrate Thunderbolt 3 to increase the speed to 40Gbps
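As promised above, here is the quick back-of-the-envelope comparison of those transfer rates, sketched in Python. The 100GB file size is an arbitrary illustration, and the figures are theoretical peaks that ignore protocol overhead and real device limits:

```python
# Time to move a 100 GB file at each standard's theoretical peak rate.
FILE_SIZE_GB = 100        # arbitrary example file
GIGABITS_PER_GB = 8       # 1 gigabyte = 8 gigabits

standards = [
    ("USB 3.0 (original)", 5),
    ("USB 3.2 Gen 2x2 (latest USB 3)", 20),
    ("USB4 over a certified cable", 40),
]

for name, gbps in standards:
    seconds = FILE_SIZE_GB * GIGABITS_PER_GB / gbps
    print(f"{name}: {seconds:.0f} seconds")
```

At a theoretical 40Gbps the transfer takes 20 seconds, against 40 seconds for USB 3.2 Gen 2x2 and 160 seconds for original USB 3.0, which is where the "twice" and "eight times" figures come from.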


Raspberry Pi 4 is up for sale at $35, with 64-bit ARM core, up to 4GB memory, full-throughput gigabit Ethernet and more!

Vincy Davis
24 Jun 2019
5 min read
Today, the Raspberry Pi 4 model went up for sale, starting at $35. It has a 1.5GHz quad-core 64-bit ARM Cortex-A72 CPU, three memory options of up to 4GB, full-throughput gigabit Ethernet, dual-band 802.11ac wireless networking, two USB 3.0 and two USB 2.0 ports, complete compatibility with earlier Raspberry Pi products, and more. Eben Upton, Chief Executive at Raspberry Pi Trading, has said that "This is a comprehensive upgrade, touching almost every element of the platform." This is also the first Raspberry Pi product available offline since the opening of the Raspberry Pi store in Cambridge, UK.

https://youtu.be/sajBySPeYH0

What's new in Raspberry Pi 4?

New Raspberry Pi silicon

Previous Raspberry Pi models are based on 40nm silicon. The new Raspberry Pi 4, however, is a complete re-implementation of BCM283X on 28nm. The power saving delivered by the smaller process geometry has enabled the use of the Cortex-A72, a 1.5GHz quad-core 64-bit ARM core. The Cortex-A72 can execute more instructions per clock, yielding up to a fourfold performance improvement over the Raspberry Pi 3B+, depending on the benchmark.

New Raspbian software

The new Raspbian software provides numerous technical improvements, along with an extensively modernized user interface and updated applications, including the Chromium 74 web browser. For Raspberry Pi 4, the Raspberry Pi team has retired the legacy graphics driver stack used on previous models and opted for the Mesa "V3D" driver. It offers benefits like OpenGL-accelerated web browsing and desktop composition, and it also eliminates roughly half of the lines of closed-source code in the platform.

Raspberry Pi 4 memory options

For the first time, Raspberry Pi 4 offers a choice of memory capacities:

1GB LPDDR4: $35
2GB LPDDR4: $45
4GB LPDDR4: $55

All three variants of the new Raspberry Pi model have been launched. The entry-level Raspberry Pi 4 Model B is priced at $35, excluding sales tax, import duty, and shipping.

Additional improvements in Raspberry Pi 4

Power

Raspberry Pi 4 uses USB-C as its power connector, which supports an extra 500mA of current, ensuring 1.2A for downstream USB devices even under heavy CPU load.

Video

The previous type-A HDMI connector has been replaced with a pair of type-D HDMI connectors to accommodate dual display output within the existing board footprint.

Ethernet and USB

The Gigabit Ethernet magjack has been moved to the top right of the board, simplifying the PCB routing. The 4-pin Power-over-Ethernet (PoE) connector is in the same location, so Raspberry Pi 4 remains compatible with the PoE HAT. The Ethernet controller on the main SoC is connected to an external Broadcom PHY, providing full throughput. USB is provided via an external VLI controller, connected over a single PCI Express Gen 2 lane, providing a total of 4Gbps of bandwidth shared between the four ports.

The Raspberry Pi 4 also moves to LPDDR4 memory technology, with triple the bandwidth, and upgrades the video decode, 3D graphics, and display output to support 4Kp60 throughput. Onboard Gigabit Ethernet and PCI Express controllers have been added to address the non-multimedia I/O limitations of previous devices.

Image Source: Raspberry Pi blog

New Raspberry Pi 4 accessories

Due to the connector and form-factor changes, Raspberry Pi 4 requires new accessories. It has its own case, priced at $5, and the team has also developed a suitable 5V/3A power supply, priced at $8 and available in UK, European, North American, and Australian plug formats. The Raspberry Pi 4 Desktop Kit is also available, priced at $120.

While the earlier Raspberry Pi models will remain available on the market, Upton has said that Raspberry Pi will continue to build them as long as there is demand.

Users are quite ecstatic about the availability of the Raspberry Pi 4, and many have already placed orders for it.

https://twitter.com/Morphy99/status/1143103131821252609

https://twitter.com/M0VGA/status/1143064771446677509

A user on Reddit comments, "Very nice. Gigabit LAN and 4GB memory is opening it up to a hell of a lot more use cases. I've been tempted by some of the Pi's higher-specced competitors like the Pine64, but didn't want to lose out on the huge community behind the Pi. This seems like the best of both worlds to me."

A user on Hacker News says, "Oh my! This is such a crazy upgrade. I've been using the RPI2 as my HTPC/NAS at my folks, and I'm so happy with it. I was itching to get the last one for myself. USB 3.0! Gigabit Ethernet! WiFi 802.11ac, BT 5.0, 4GB RAM! 4K! $55 at most?! What the!? How the??! I know I'm not maintaining decorum at Hacker News, but I am SO mighty, MIGHTY excited! I'm setting up a VPN to hook this (when I get it) to my VPS and then do a LOT of fun stuff back and forth, remotely, and with the other RPI at my folks."

Another comment reads, "This is absolutely great. The RPi was already exceptional for its price point, and this version seems to address the few problems it had (lack of Gigabit, USB speed and RAM capacity) and add onto it even more features. It almost seems too good to be true. Can't wait!"

Another user says, "I'm most excited about the modern A72 cores, upgraded hardware decode, and up to 4 GB RAM. They really listened and delivered what most people wanted in a next gen RPi."

For more details, head over to the Raspberry Pi official blog.

You can now install Windows 10 on a Raspberry Pi 3

Setting up a Raspberry Pi for a robot – Headless by Default [Tutorial]

Introducing Strato Pi: An industrial Raspberry Pi


Amazon re:MARS Day 1 kicks off showcasing Amazon’s next-gen AI robots; Spot, the robo-dog and a guest appearance from ‘Iron Man’

Savia Lobo
06 Jun 2019
11 min read
Amazon's inaugural re:MARS event kicked off on Tuesday, June 4, at the Aria in Las Vegas. This four-day event is inspired by MARS, a yearly invite-only event hosted by Jeff Bezos that brings together innovative minds in Machine learning, Automation, Robotics, and Space to share new ideas across these rapidly advancing domains.

re:MARS featured a host of announcements revealing a range of robots, each engineered for a different purpose. Some of them include helicopter drones for delivery, two robot dogs by Boston Dynamics, and autonomous human-like acrobats by Walt Disney Imagineering, among others. Amazon also revealed Alexa's new dialog modeling for natural, cross-skill conversations. Let's have a brief look at each of the announcements.

Robert Downey Jr. announces 'The Footprint Coalition' project to clean up the environment using robotics

Robert Downey Jr., popularly known as "Iron Man", provided one of the event's most exciting moments when he announced a new project called The Footprint Coalition, which aims to clean up the planet using advanced technologies. "Between robotics and nanotechnology we could probably clean up the planet significantly, if not entirely, within a decade," he said.

According to Forbes, "Amazon did not immediately respond to questions about whether it was investing financially or technologically in Downey Jr.'s project." "At this point, the effort is severely light on details, with only a bare-bones website to accompany Downey's public statement, but the actor said he plans to officially launch the project by April 2020," Forbes reports.

A recent United Nations report found that humans are having an unprecedented and devastating effect on global biodiversity, and researchers have found microplastics polluting the air, ocean, and soil. The announcement also comes at a time when Amazon itself is under fire for its policies around the environment and climate change.

Additionally, Morgan Pope and Tony Dohi of Walt Disney Imagineering demonstrated their work on creating autonomous acrobats.

https://twitter.com/jillianiles/status/1136082571081555968

https://twitter.com/thesullivan/status/1136080570549563393

Amazon will soon deliver orders using drones

On Wednesday, Amazon unveiled a revolutionary new drone that will test-deliver toothpaste and other household goods starting within months. This drone is "part helicopter and part science-fiction aircraft", with built-in AI features and sensors that will help it fly robotically without threatening traditional aircraft or people on the ground.

Gur Kimchi, vice president of Amazon Prime Air, said in an interview with Bloomberg, "We have a design that is amazing. It has performance that we think is just incredible. We think the autonomy system makes the aircraft independently safe." However, he declined to provide details on where the delivery tests will be conducted. The drones have received a year's approval from the FAA to test the devices in limited ways that still won't allow deliveries.

According to a Bloomberg report, "It can take years for traditional aircraft manufacturers to get U.S. Federal Aviation Administration approval for new designs and the agency is still developing regulations to allow drone flights over populated areas and to address national security concerns. The new drone presents even more challenges for regulators because there aren't standards yet for its robotic features".
Competitors to Amazon's unnamed drone include Alphabet Inc.'s Wing, which in April became the first drone operator to win FAA approval to operate as a small airline. United Parcel Service Inc. and drone startup Matternet Inc. also began using drones to move medical samples between hospitals in Raleigh, North Carolina, in March.

Amazon's drone is about six feet across, with six propellers that lift it vertically off the ground. It is surrounded by a six-sided shroud that protects people from the propellers and also serves as a high-efficiency wing, allowing it to fly more horizontally like a plane. Once it gets off the ground, the craft tilts and flies sideways, its helicopter blades acting more like airplane propellers.

Kimchi said, "Amazon's business model for the device is to make deliveries within 7.5 miles (12 kilometers) from a company warehouse and to reach customers within 30 minutes. It can carry packages weighing as much as five pounds. More than 80% of packages sold by the retail behemoth are within that weight limit."

According to the company, one of the things the drone has mastered is detecting utility wires and clotheslines, which have been notoriously difficult to identify reliably and pose a hazard for a device attempting to make deliveries in urban and suburban areas. To know more about these high-tech drones in detail, head over to Amazon's official blogpost.

Boston Dynamics' first commercial robot, Spot

Boston Dynamics revealed its first commercial product, a quadrupedal robot named Spot. Boston Dynamics CEO Marc Raibert told The Verge, "Spot is currently being tested in a number of 'proof-of-concept' environments, including package delivery and surveying work." He also said that although there's no firm launch date for the commercial version of Spot, it should be available within months, certainly before the end of the year. "We're just doing some final tweaks to the design. We've been testing them relentlessly," Raibert said.

Spot robots are capable of navigating environments autonomously, but only when their surroundings have been mapped in advance. They can withstand kicks and shoves and keep their balance on tricky terrain, but they don't decide for themselves where to walk. The robots are simple to control: using a D-pad, users can steer one just like an RC car or mechanical toy. A quick tap on the video feed streamed live from the robot's front-facing camera lets the user select a destination for it to walk to, and another tap gives the user control of a robot arm mounted on top of the chassis.

With 3D cameras mounted on top, a Spot robot can map environments like construction sites, identifying hazards and work progress. Its robot arm gives it greater flexibility, helping it open doors and manipulate objects.

https://twitter.com/jjvincent/status/1136096290016595968

The commercial version will be "much less expensive than prototypes [and] we think they'll be less expensive than other peoples' quadrupeds," Raibert said. Here's a demo video of the Spot robot at the re:MARS event.

https://youtu.be/xy_XrAxS3ro

Alexa gets new dialog modeling for improved natural, cross-skill conversations

Amazon unveiled new features in Alexa that will help the conversational agent answer more complex questions and carry out more complex tasks.
Rohit Prasad, Alexa vice president and head scientist, said, "We envision a world where customers will converse more naturally with Alexa: seamlessly transitioning between skills, asking questions, making choices, and speaking the same way they would with a friend, family member, or co-worker. Our objective is to shift the cognitive burden from the customer to Alexa."

This new update to Alexa is a set of AI modules that work together to generate responses to customers' questions and requests. With every round of dialog, the system produces a vector (a fixed-length string of numbers) that represents the context and the semantic content of the conversation. "With this new approach, Alexa will predict a customer's latent goal from the direction of the dialog and proactively enable the conversation flow across topics and skills," Prasad says. "This is a big leap for conversational AI."

At re:MARS, Prasad also announced the developer preview of Alexa Conversations, a new deep learning-based approach that lets skill developers create more natural voice experiences with less effort, fewer lines of code, and less training data than before. The preview allows skill developers to create natural, flexible dialogs within a single skill; upcoming releases will allow developers to incorporate multiple skills into a single conversation.

With Alexa Conversations, developers provide:

1. application programming interfaces (APIs) that provide access to their skills' functionality;
2. a list of entities that the APIs can take as inputs, such as restaurant names or movie times;
3. a handful of sample dialogs annotated to identify entities and actions and mapped to API calls.

Alexa Conversations' AI technology handles the rest. "It's way easier to build a complex voice experience with Alexa Conversations due to its underlying deep-learning-based dialog modeling," Prasad said. To know more about this announcement in detail, head over to Alexa's official blogpost.

Amazon Robotics unveiled two new robots at its fulfillment centers

Brad Porter, vice president of robotics at Amazon, announced two new robots: one code-named Pegasus, the other Xanthus. Pegasus, built to sort packages, is a 3-foot-wide robot equipped with a conveyor belt on top to drop the right box in the right location. "We sort billions of packages a year. The challenge in package sortation is, how do you do it quickly and accurately? In a world of Prime one-day [delivery], accuracy is super-important. If you drop a package off a conveyor, lose track of it for a few hours — or worse, you mis-sort it to the wrong destination, or even worse, if you drop it and damage the package and the inventory inside — we can't make that customer promise anymore," Porter said. Porter said Pegasus robots have already driven a total of 2 million miles and have reduced the number of wrongly sorted packages by 50 percent.

Xanthus, Porter said, represents the latest incarnation of Amazon's drive robot. Amazon uses tens of thousands of the current-generation robot, known as Hercules, in its fulfillment centers. Amazon unveiled the Xanthus Sort Bot and Xanthus Tote Mover. "The Xanthus family of drives brings innovative design, enabling engineers to develop a portfolio of operational solutions, all of the same hardware base through the addition of new functional attachments. We believe that adding robotics and new technologies to our operations network will continue to improve the associate and customer experience," Porter says.
To know more about these new robots, watch the video below:

https://youtu.be/4MH7LSLK8Dk

StyleSnap: AI-powered shopping

Amazon announced StyleSnap, a new move to promote AI-powered shopping. StyleSnap helps users pick out clothes and accessories: all they need to do is upload a photo or screenshot of what they are looking for, for those times when they can't describe what they want.

https://twitter.com/amazonnews/status/1136340356964999168

Amazon said, "You are not a poet. You struggle to find the right words to explain the shape of a neckline, or the spacing of a polka dot pattern, and when you attempt your text-based search, the results are far from the trend you were after."

To use StyleSnap, open the Amazon app, tap the camera icon in the upper right-hand corner, select the StyleSnap option, and upload an image of the outfit. StyleSnap then provides recommendations of similar outfits on Amazon to purchase, with users able to filter by brand, price, and reviews. Amazon's AI system can identify colors and edges, and then patterns like floral and denim. Using this information, its algorithm can accurately pick a matching style.

To know more about StyleSnap in detail, head over to Amazon's official blog post.

Amazon Go trains cashierless store algorithms using synthetic data

At re:MARS, Amazon shared more details about Amazon Go, the company's brand for its cashierless stores, saying that Amazon Go uses synthetic data to intentionally introduce errors into its computer vision system. Challenges that had to be addressed before opening the stores to avoid queues included building vision systems that account for sunlight streaming into a store, allowing little time for latency delays, and coping with small amounts of data for certain tasks. Synthetic data is being used in a number of ways elsewhere too: to power few-shot learning, improve AI systems that control robots, train AI agents to walk, or beat humans in games of Quake III.

Dilip Kumar, VP of Amazon Go, said, "As our application improved in accuracy — and we have a very highly accurate application today — we had this interesting problem that there were very few negative examples, or errors, which we could use to train our machine learning models." He further added, "So we created synthetic datasets for one of our challenging conditions, which allowed us to be able to boost the diversity of the data that we needed. But at the same time, we have to be careful that we weren't introducing artifacts that were only visible in the synthetic data sets, [and] that the data translates well to real-world situations — a tricky balance." (A minimal sketch of this synthetic-augmentation idea follows at the end of this piece.)

To know more about this news in detail, check out this video:

https://youtu.be/jthXoS51hHA

The Amazon re:MARS event is still ongoing and will have many more updates. To catch live updates from Vegas, visit Amazon's blog.

World's first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase

Amazon introduces S3 batch operations to process millions of S3 objects

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available
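As promised above, here is a minimal sketch of the synthetic-augmentation idea Kumar describes: when real negative (error) examples are scarce, generate perturbed variants of the few you have to boost training diversity. This is an illustrative toy in plain NumPy, not Amazon Go's actual pipeline; the image shapes and perturbation choices are assumptions invented for the example.

```python
# Toy synthetic-data augmentation: create perturbed copies of a rare
# training example to boost dataset diversity. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def synthesize(image, n_variants=5):
    """Return perturbed copies of `image` (H x W x 3, floats in [0, 1])."""
    variants = []
    for _ in range(n_variants):
        v = image.copy()
        v = v * rng.uniform(0.6, 1.4)             # simulate lighting changes (e.g. sunlight)
        if rng.random() < 0.5:
            v = v[:, ::-1, :]                     # horizontal flip
        v = v + rng.normal(0, 0.02, v.shape)      # mild sensor noise
        variants.append(np.clip(v, 0.0, 1.0))     # keep pixel values in a realistic range
    return variants

# hypothetical rare "error" example: a single 64x64 RGB frame
rare_example = rng.random((64, 64, 3))
augmented = synthesize(rare_example)
print(len(augmented), augmented[0].shape)
```

The "tricky balance" Kumar mentions shows up even in this toy: every perturbation has to stay within ranges a real store camera could plausibly produce, or the model ends up learning to detect the synthetic artifacts rather than the errors.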


Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

Sugandha Lahoti
07 May 2019
10 min read
At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge.

In his opening keynote, Microsoft CEO Satya Nadella outlined the company's vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming. "As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in," said Satya Nadella, CEO, Microsoft. "Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone."

https://youtu.be/rIJRFHDr1QE

Increasing developer productivity in the Microsoft 365 platform

Microsoft Graph data connect

Microsoft Graph data connect is a new service that combines analytics data from the Microsoft Graph with customers' business data. It will provide Office 365 data and Microsoft Azure resources to users via a toolset, with migration pipelines deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics, and also as a standalone SKU for ISVs. More information here.

Microsoft Search

Microsoft Search works as a unified search experience across all Microsoft apps: Office, Outlook, SharePoint, OneDrive, Bing and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalize searches. Other features included in Microsoft Search are:

- Search box displacement
- Zero query typing and a key-phrase suggestion feature
- A query history feature, including personal search query history
- Administrator access to the history of popular searches for their organizations, but not to search history for individual users
- Files/people/site/bookmark suggestions

Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on MS Search here.

Fluid Framework

As the name suggests, Microsoft's newly launched Fluid Framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software developer kit and the first experiences powered by the Fluid Framework later this year in Microsoft Word, Teams, and Outlook. Read more about the Fluid Framework here.

Microsoft Edge new features

Microsoft Build 2019 paved the way for a bundle of new features in Microsoft's flagship web browser, Microsoft Edge. New features include:

Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab, allowing businesses to run legacy Internet Explorer-based apps in a modern browser.
Privacy Tools: Additional privacy controls that let customers choose from three levels of privacy in Microsoft Edge: Unrestricted, Balanced, and Strict. These options limit the ability of third parties to track users across the web. "Unrestricted" allows all third-party trackers to work in the browser, "Balanced" blocks third-party trackers from sites the user has not visited before, and "Strict" blocks all third-party trackers.

Collections: Collections lets users collect, organize, share and export content more efficiently, with Office integration.

Microsoft is also migrating Edge as a whole over to Chromium, which will make Edge easier for third parties to develop for. For more details, visit Microsoft's developer blog.

New toolkit enhancements in the Microsoft 365 platform

Windows Terminal

Windows Terminal is Microsoft's new application for Windows command-line users. Top features include:

- A user interface with emoji-rich fonts and GPU-accelerated text rendering
- Multiple tab support, plus theming and customization features
- A powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL) and all forms of command-line application

Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here.

React Native for Windows

Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows via the "React Native for Windows" implementation. React Native for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices and more. The project is being developed on GitHub and is available for developers to test, with more mature releases to follow soon.

Windows Subsystem for Linux 2

Microsoft rolled out a new architecture for the Windows Subsystem for Linux, WSL 2, at Build 2019, and will also be shipping a fully open-source Linux kernel with Windows, specially tuned for WSL 2. New features include massive file-system performance increases (twice the speed for file-system-heavy operations, such as a Node Package Manager install), and WSL 2 also supports running Linux Docker containers. The next generation of WSL arrives for Insiders in mid-June. More information here.

New releases in multiple developer tools

.NET 5 arrives in 2020

.NET 5 is the next major version of the .NET platform, which will be available in 2020. .NET 5 will have all .NET Core features as well as more additions:

- One Base Class Library containing APIs for building any type of application
- More choice in runtime experiences
- Java interoperability on all platforms
- Objective-C and Swift interoperability on multiple operating systems

.NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios, and will offer one unified toolchain supported by new SDK project types as well as a flexible deployment model (side-by-side and self-contained EXEs). Detailed information here.

ML.NET 1.0

ML.NET is Microsoft's open-source, cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible to .NET developers. Its new version, ML.NET 1.0, was released at Microsoft Build 2019 yesterday.
Some new features in this release are:

- Automated Machine Learning Preview: Transforms input data by selecting the best-performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports regression and classification ML tasks.
- ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers that uses AutoML to build ML models. It also generates model training and model consumption code for the best-performing model.
- ML.NET CLI Preview: The ML.NET CLI is a dotnet tool that generates ML.NET models using AutoML and ML.NET. It quickly iterates through a dataset for a specific ML task and produces the best model.

Visual Studio IntelliCode, Microsoft's tool for AI-assisted coding

Visual Studio IntelliCode, Microsoft's AI-assisted coding tool, is now generally available. It is essentially an enhanced IntelliSense, Microsoft's extremely popular code-completion tool. IntelliCode is trained on the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML in Visual Studio, and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. IntelliCode is also included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview.

Visual Studio 2019 version 16.1 Preview 2

The Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings the Time Travel Debugging feature introduced with version 16.0 out of preview, and includes multiple performance and productivity improvements for .NET and C++ developers.

Gaming and mixed reality

Minecraft AR game for mobile devices

At the end of Microsoft's Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft.

https://www.youtube.com/watch?v=UiX0dVXiGa8

HoloLens 2 Development Edition and Unreal Engine support

The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits, and three-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. It will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May.

Intelligent edge and IoT

Azure IoT Central new features

Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution:

- Better rules processing and custom rules with services like Azure Functions or Azure Stream Analytics
- Multiple dashboards and data visualization options for different types of users
- Inbound and outbound data connectors, so that operators can integrate with other systems
- The ability to add custom branding and operator resources to an IoT Central application, with new white-labeling options

The new Azure IoT Central features are available for customer trials.

IoT Plug and Play

IoT Plug and Play is a new, open modeling language for connecting IoT devices to the cloud seamlessly, without developers having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play-enabled devices in Microsoft's Azure IoT Device Catalog.
The first device partners include Compal, Kyocera, and STMicroelectronics, among others.

Azure Maps Mobility Service

Azure Maps Mobility Service is a new API that provides real-time public transit information, including nearby stops, routes and trip intelligence. The API will also provide transit services to help with city planning, logistics, and transportation. Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here.

KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat collaborated to create KEDA, an open-sourced project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment, in any public or private cloud or on-premises, including Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers that respond to events happening in other services or components, allowing a container to consume events directly from the source instead of routing them through HTTP. KEDA also presents a new hosting option for Azure Functions, which can be deployed as a container in Kubernetes clusters.

Securing elections and political campaigns

ElectionGuard SDK and Microsoft 365 for Campaigns

ElectionGuard is a free, open-source software development kit (SDK), released as an extension of Microsoft's Defending Democracy Program, that enables end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here.

Microsoft Build is in its 6th year and will continue through May 8. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here!

Microsoft introduces Remote Development extensions to make remote development easier on VS Code

Docker announces a collaboration with Microsoft's .NET at DockerCon 2019

How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]


CES 2019 is bullshit we don't need after 2018's techlash

Richard Gall
08 Jan 2019
6 min read
The asinine charade that is CES is running in Las Vegas this week. Describing itself as 'the global stage of innovation', CES attempts to set the agenda for a new year in tech. While ostensibly it's an opportunity to see how technology might impact the lives of all of us over the next decade (or more), it is, in truth, a vapid carnival that does nothing but make the technology industry look stupid.

Okay, perhaps I'm being a fun sponge: what's wrong with smart doorbells, internet-connected planks of wood and other madcap ideas? Well, nothing really - but those inventions are only the tip of the iceberg. Disagree? Don't worry: you can find the biggest announcements from day one of CES 2019 here.

What CES gets wrong

Where CES really gets it wrong - and where it drives down a dead end of vacuity - is in how it showcases the mind-numbing rush to productize and then commercialize some of the really serious developments that could transform the world, in ways ultimately far less trivial than the glitz and glamor of the media coverage would suggest.

This isn't to say that there won't be important news and interesting discussions to come out of CES. But even the more interesting topics can be diluted, becoming buzzwords for marketers to latch onto. As Wired remarks on Twitter, "the term AI-powered is used loosely and is almost always a marketing ploy, whether or not a product is impacted by AI." In the same thread, the publication's account also notes that 5G, another big theme for the event, won't be widely available for at least another 12 months.

https://twitter.com/WIRED/status/1082294957979910144

Ultimately, what this tells us is that the focus of CES isn't really technology - not in the sense of how we build it and how we should use it. Instead, it is an event dedicated to the ways we can sell it.

Perhaps in previous years the gleeful excitement of CES was nothing but a bit of light relief as we recovered from the holiday period. But this year is different. 2018 was a year of reckoning in tech, as a range of scandals emerged that underlined the ways in which exciting technological innovation can be misused and deployed against the very people we assume it should be helping. From the Cambridge Analytica scandal to the controversy surrounding Amazon's Rekognition, Google's Project Dragonfly, and Microsoft's relationship with ICE, 2018 made it clearer than ever that buried somewhere beneath novel and amusing inventions and better-quality television screens is a set of interests that have little interest in making life better for people.

The corporate glamor of CES 2019 is just kitsch

It's not news that there are certain organisations and institutions that don't have the interests of the majority at heart. But CES 2019 does take on a new complexion in the shadow of all that happened in 2018. The question 'what's the point of all this?' takes on a more serious edge.

When you add in the dissent that has come from a growing part of the Silicon Valley workforce, CES 2019 starts to look like an event that, much like many industry leaders, wants to bury the messy and complex reality of building software in favor of marketing buzz. In The Unbearable Lightness of Being, the author Milan Kundera describes kitsch as "the absolute denial of shit." It's by this definition that you can see CES as a kitsch event: it pushes the decisions and inevitable trade-offs that go into developing new technologies and products into the shadows.
It doesn't take negative consequences seriously. It's all just 'shit' that should be ignored.

This all adds up to a message that seems to be: better doesn't even need to be built. It's here already, no risks, no challenges.

Developers don't really feature at CES. That's not necessarily a problem - after all, it's not an event for them, and what developer wants to spend time hearing marketers talk about AI? But if 2018 has taught us anything, it's that a culture of commercialization that refuses to consider consequences beyond what can be done in the service of business growth can be immensely damaging. It hurts people, and it might even be hurting democracy. Okay, the way to correct things probably isn't simply to invite more engineers to CES. But by the same token, CES is hardly helping things either.

Everything important is happening outside the event

Everything important seems to be happening at the periphery of this year's CES, in some instances quite literally outside the building. Apple's ad, for example, might have been a clever piece of branding, but it has captured the attention of the world. Arguably, it's more memorable than much of what's happening inside the event. And although it's possible to be cynical, it does nevertheless raise important questions about a number of companies' attitudes to user data.

https://twitter.com/NateIngraham/status/1081612316532064257

Another big talking point as this year's event began is who isn't present. Due to the government shutdown, a number of officials that were due to attend and speak have had to cancel. This acts as a reminder of the wider context in which CES 2019 is taking place, in which a nativist government looks set on controlling who moves across its borders, and how. It also highlights how euphemistic the phrase 'consumer technology' really is. TVs and cloud-connected toilets might take the headlines, but it's government surveillance that will likely have the biggest impact on our lives in the future.

Not that any of this seemed to matter to Gary Shapiro, the Chief Executive of the Consumer Technology Association (the organization that puts on CES). Speaking to the BBC, Shapiro said: "It's embarrassing to be on the world stage with a dominant event in the world of technology, and our federal government... can't be there to host their colleague government executives from around the world." Shapiro's frustration is understandable from an organizer's perspective. But it also betrays the apparent ethos of CES: what's happening outside doesn't matter.

We all deserve better than CES 2019

The new products on show at CES 2019 won't make everything better. There's a chance they will make everything worse. Arguably, the more blindly optimistic we are that they'll make things better, the more likely they are to make things worse. It's only by thinking through complex questions, and taking time to consider the possible consequences of our decision making as developers, product managers, or business people, that we can actually be sure things will get better.

This doesn't mean we need to stop getting excited about new inventions and innovations. But things like smart cities and driverless cars pose a whole range of issues that shouldn't be buried in the optimistic schmaltz of events like CES. They need care and attention from policy makers, designers, software engineers, and many others to ensure they actually help build a better world for people.

4 things in tech that might die in 2019

Richard Gall
19 Dec 2018
10 min read
If you're in and around the tech industry, you've probably noticed that hype is an everyday reality. People spend a lot of time talking about what trends and technologies are up and coming and what people need to be aware of - they just love it. Perhaps second only to the fashion industry, the tech world moves through ideas quickly, with one innovation piling on top of the next.

For the most part, our focus is optimistic: what is important? What's actually going to shape the future? But with so much change, there are plenty of things that disappear completely or simply shift out of view. Some of these things may have barely made an impression; others may have been important but are beginning to be replaced by other, more powerful, transformative and relevant tools.

So, in the spirit of pessimism, here is a list of some of the trends and tools that might disappear from view in 2019. Some of these have already begun to sink, while others might leave you pondering whether I've completely lost my marbles. Of course, I am willing to be proven wrong. While I will not be eating my hat or any other item of clothing, I will nevertheless accept defeat with good grace in 12 months' time.

Blockchain

Let's begin with a surprise. You probably expected Blockchain to be hyped for 2019, but no, 2019 might in fact be the year that Blockchain dies. Let's consider where we are right now: Blockchain, in itself, is a good idea. But so far all we've really had is various cryptocurrencies looking ever so slightly like pyramid schemes. Any further applications of Blockchain have, by and large, eluded the tech world. In fact, it's become a useful sticker for organizations looking to raise funds - there are examples of apps out there that support Blockchain-backed technologies in the early stages of funding, which are later dropped as the company gains support. And it's important to note that the word Blockchain doesn't actually refer to one thing - there are many competing definitions, as this article on The Verge explains so well.

At risk of sounding flippant, Blockchain is ultimately a decentralized database. The reason it's so popular is precisely because there is a demand for a database that is both scalable and available to a variety of parties - a database that isn't surrounded by the implicit bureaucracy and politics that surround even the most prosaic ones. From this perspective, it feels likely that 2019 will be a search for better ways of managing data - whether that includes Blockchain in its various forms remains to be seen.

What you should learn instead of Blockchain

A trend that some have seen as related to Blockchain is edge computing. Essentially, this is all about decentralized data processing at the 'edge' of a network, as opposed to within a centralized data center (say, for example, cloud). Understanding the value of edge computing could allow us to better realise what Blockchain promises. Learn edge computing with Azure IoT Development Cookbook.

It's also worth digging deeper into databases - understanding how we can make these more scalable, reliable, and available is essentially the task that anyone pursuing Blockchain is trying to achieve. So, instead of worrying about a buzzword, go back to what really matters. Get to grips with new databases. Learn with Seven NoSQL Databases in a Week.

Why I could be wrong about Blockchain

There's a lot of support for Blockchain across the industry, so it might well be churlish to dismiss it at this stage.
Blockchain certainly does offer a completely new way of doing things, and there are potentially thousands of use cases. If you want to learn Blockchain, check out these titles:

Mastering Blockchain, Second Edition
Foundations of Blockchain
Blockchain for Enterprise

Hadoop and big data

If Blockchain is still receiving considerable hype, then big data has been slipping away quietly for the last couple of years. Of course, it hasn't quite disappeared - data is now a normal part of reality. It's just that trends like artificial intelligence and cloud have emerged to take its place and put even greater emphasis on what we're actually doing with that data, and how we're doing it.

Read more: Why is Hadoop dying?

With this change in emphasis, we've also seen the slow death of Hadoop. In a world that is increasingly cloud native, it simply doesn't make sense to run data on your own cluster of computers - leveraging the public cloud makes much more sense. You might, for example, use Amazon S3 to store your data and then Spark, Flink, or Kafka for processing. The advantages of cloud are well documented. But in terms of big data, cloud allows for much greater elasticity of scale, greater speed, and makes it easier to perform machine learning, thanks to built-in features that a number of the leading cloud vendors provide.

What you should learn instead of Hadoop

The future of big data largely rests in tools like Spark, Flink and Kafka. But it's important to note that it's not really just about a couple of tools. As big data evolves, the focus will need to be on broader architectural questions about what data you have, where it needs to be stored, and how it should be used. Arguably, this is why 'big data' as a concept will lose currency with the wider community - it will still exist, but it will be part and parcel of everyday reality rather than separate from everything else we do.

Learn the tools that will drive big data in the future:

Apache Spark 2: Data Processing and Real-Time Analytics [Learning Path]
Apache Spark: Tips, Tricks, & Techniques [Video]
Big Data Processing with Apache Spark
Learning Apache Flink
Apache Kafka 1.0 Cookbook

Why I could be wrong about Hadoop

Hadoop 3 is on the horizon and could be the saving grace for Hadoop. Updates suggest that this new iteration is going to be much more amenable to cloud architectures.

Learn Hadoop 3:

Apache Hadoop 3 Quick Start Guide
Mastering Hadoop 3

R

12 to 18 months ago, debate was raging over whether R or Python was the best language for data. As we approach the end of 2018, that debate seems to have all but disappeared, with Python finally emerging as the go-to language for anyone working with data. There are a number of reasons for this, but the biggest is that Python has the best libraries and frameworks for developing machine learning models. Keras, for example, which runs on top of TensorFlow, makes developing pretty sophisticated machine and deep learning systems relatively easy, as the sketch below shows. R simply can't match Python in this way.

With this ease comes increased adoption. If people want to 'get into' machine learning and artificial intelligence, Python is the obvious choice. This doesn't mean R is dead - instead, it will continue to be a language that remains relevant for very specific use cases in research and data analysis. If you're a researcher in a university, for example, you'll probably be using R. But it at least now has to concede that it will never have the reach or levels of growth that Python has.
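Here is that sketch: a minimal, self-contained example of the kind of thing Keras makes easy - a small neural network defined, compiled, and trained in a handful of lines. The data is random and purely illustrative.

```python
# A toy binary classifier in Keras: random data, but a complete,
# runnable training loop in roughly a dozen lines.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 20)            # 1,000 samples with 20 features each
y = np.random.randint(0, 2, size=1000)  # random binary labels

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32)
```

Swap the random arrays for a real dataset and this same skeleton scales up to genuinely sophisticated models, which is a large part of why Python has pulled so far ahead.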
What you should learn instead of R

This is obvious: if you're worried about R's flexibility and adaptability for the future, you need to learn Python. But Python is certainly not the only option when it comes to machine learning - the likes of Scala and Go could prove useful assets on your CV, for machine learning and beyond. Learn a new way to tackle contemporary data science challenges: Python Machine Learning - Second Edition; Hands-on Supervised Machine Learning with Python [Video]; Machine Learning With Go; Scala for Machine Learning - Second Edition.

Why I could be wrong about R

R is still an incredibly useful language when it comes to data analysis. Particularly if you're working with statistics in a variety of fields, it's likely that it will remain an important part of your skill set for some time. Check out these R titles: Getting Started with Machine Learning in R [Video]; R Data Analysis Cookbook - Second Edition; Neural Networks with R.

IoT

IoT is a term that has been hanging around for quite a while now, but it still hasn't quite delivered on the hype it originally received. Like Blockchain, 2019 is perhaps IoT's make-or-break year. Even if it doesn't develop into the sort of thing it promised, it could at least begin to break down into more practical concepts - like, for example, edge computing. In this sense, we'd stop talking about IoT as if it were a single homogeneous trend about to hit the modern world, and instead treat it as a set of discrete technologies that can produce new types of products and complement existing (literal) infrastructure. The other challenge that IoT faces in 2019 is that the very concept of a connected world depends upon decision making - and policy - beyond the world of technology and business. If, for example, we're going to have smart cities, there needs to be some kind of structure in place on which some degree of digital transformation can take place. Similarly, if every single device is to be connected in some way, questions will need to be asked about how these products are regulated and how this data is managed. Essentially, IoT is still a bit of a wild west. Given a year of growing scepticism about technology, major shifts are unlikely over the next 12 months.

What to learn

One way of approaching IoT is to take a step back and think about its purpose, and which facets of it are most pertinent to what you want to achieve. Are you interested in collecting and analyzing data? Or in developing products that have built-in operational intelligence? Once you think about it from this perspective, IoT begins to sound less like a conceptual behemoth and more like something practical and actionable.

Why I could be wrong about IoT

Immediate shifts in IoT might be slow, but it could begin to pick up speed in organizations that understand it can deliver very specific value. In this sense, IoT is a little like Blockchain - it's only really going to work if we can move past the hype and get into the practical uses of different technologies. Check out some of our latest IoT titles: Internet of Things Programming Projects; Industrial Internet Application Development; Introduction to Internet of Things [Video]; Alexa Skills Projects.

Does anything really die in tech?

You might be surprised at some of the entries on this list - others, not so much. But either way, it's worth pointing out that ultimately nothing ever properly disappears in tech.
From a legacy perspective, change and evolution often happen slowly; and when it comes to innovation, buzzwords and hype don't simply vanish - they mature and influence developments in ways we might not have initially expected. What will really be important in 2019 is to be alive to these shifts, and to give yourself the best chance of taking advantage of change when it really matters.

MIPS open sourced under ‘MIPS Open Program’, makes the semiconductor space and SoC, ones to watch for in 2019

Melisha Dsouza
18 Dec 2018
4 min read
On 17th December, Wave Computing announced that it will open source MIPS, with the MIPS Instruction Set Architecture (ISA) and MIPS' latest core, R6, to be made available in the first quarter of 2019. With a vision to "accelerate the ability for semiconductor companies, developers and universities to adopt and innovate using MIPS for next-generation system-on-chip (SoC) designs", Wave Computing's MIPS Open program will give participants full access to the most recent versions of the 32-bit and 64-bit MIPS ISA free of charge, without any licensing or royalty fees. Additionally, participants in the MIPS Open program will be licensed under MIPS' existing worldwide patents. Addressing the "lack of open source access to true industry-standard, patent-protected and silicon-proven RISC architectures", Art Swift, president of Wave Computing's MIPS IP Business, claims that MIPS will bring to the open-source community "commercial-ready" instruction sets with "industrial-strength" architecture, where "Chip designers will have opportunities to design their own cores based on proven and well-tested instruction sets for any purposes." Lee Flanagin, Wave's senior vice president and chief business officer, further added in the post that the MIPS Open initiative is a key part of Wave's 'AI for All' vision. He says that "The MIPS-based solutions developed under MIPS Open will complement our existing and future MIPS IP cores that Wave will continue to create and license globally as part of our overall portfolio of systems, solutions and IP. This will ensure current and new MIPS customers will have a broad array of solutions from which to choose for their SoC designs, and will also have access to a vibrant MIPS development community and ecosystem." The MIPS Open initiative will further encourage the adoption of MIPS while helping customers develop new, MIPS-compatible solutions for a variety of emerging market applications, with support from third-party tool vendors, software developers and universities.

RISC-V versus MIPS?

Considering that the RISC-V instruction set architecture is also free and open for anyone to use, the internet went abuzz with speculation about competition between RISC-V and MIPS and the potential future of both. Hacker News also saw comments like: "Had this happened two or three years ago, RISC-V would have never been born." In an interview with EE Times, Rupert Baines, CEO of UltraSoC, said that "Given RISC-V's momentum, MIPS going open source is an interesting, shrewd move." He observed, "MIPS already has a host of quality tools and software environment. This is a smart way to amplify MIPS' own advantage, without losing much." Linley Gwennap, principal analyst at the Linley Group, compared the two architectures and stated, "The MIPS ISA is more complete than RISC-V. For example, it includes DSP and SIMD extensions, which are still in committee for RISC-V." Calling the MIPS software development tools more mature than RISC-V's, he went on to list the benefits of MIPS over RISC-V: "MIPS also provides patent protection and a central authority to avoid ISA fragmentation, both of which RISC-V lacks. These factors give MIPS an advantage for commercial implementations, particularly for customer-facing cores." Hacker News and Twitter are bustling with comments on this move by Wave Computing.
Opinions are split over which architecture is preferable, but for the most part, customers appear excited about this news. https://twitter.com/corkmork/status/1074857920293027840 https://twitter.com/plessl/status/1074778310025076736 You can head over to Wave Computing's official blog to learn more about this announcement. The Linux and RISC-V foundations team up to drive open source development and adoption of RISC-V instruction set architecture (ISA) Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets

How to stop hackers from messing with your home network (IoT)

Guest Contributor
16 Oct 2018
8 min read
This week, NCCIC, in collaboration with the cybersecurity authorities of Australia, Canada, New Zealand, the United Kingdom, and the United States, released a joint 'Activity Alert Report'. What is alarming in the findings is that a majority of sophisticated exploits on secure networks are being carried out by attackers using freely available tools that find loopholes in security systems. The Internet of Things (IoT) is broader than most people realize. It involves diverse elements that make it possible to transfer data from a point of origin to a destination. Various Internet-ready mechanical devices, computers, and phones are part of your IoT, including servers, networks, cryptocurrency sites, down to the tracking chip in your pet's collar. Your IoT does not require person-to-person interaction. It doesn't require person-to-device interaction either, but it does require device-to-device connections and interactions. What does all this mean to you? It means hackers have more points of entry into your personal IoT than you ever dreamed of. Here are some of the ways they can infiltrate your personal IoT devices, along with some suggestions on how to keep them out.

Your home network

How many functions are controlled via a home network? From your security system to activating lighting at dusk to changing the setting on the thermostat, many people set up automatic tasks or use remote access to manually adjust so many things. It's convenient, but it comes with a degree of risk. (Image courtesy of HotForSecurity.BitDefender.com) Hackers who are able to detect and penetrate the wireless home network via your thermostat or the lighting system eventually burrow into other areas, like the hard drive where you keep financial documents. Before you know it, you're a ransomware victim. Too many people think their OS firewall will protect them, but by the time a hacker runs into that, they're already inside the computer and can jump out to the wireless devices we've been talking about. What can you do about it? Take a cue from your bank. Have you ever tried to access your account from a device that the bank doesn't recognize? If so, then you know the bank's system requires you to provide additional means of identification, like a fingerprint scan or answering a security question. That process is called multifactor authentication. Unless the hacker can provide more information, the system blocks the attempt. Make sure your home network is set up to require additional authentication when any device other than your phone, home computer, or tablet is used.

Spyware/Malware from websites and email attachments

Hacking via email attachments or picking up spyware and malware through visits to unsecured sites is still possible. Since these typically download to your hard drive and run in the background, you may not notice anything at all for a time. All the while, your data is being harvested. You can do something about it. Keep your security software up to date and always load the latest release of your firewall. Never open attachments with unusual extensions, even if they appear to be from someone you know. Always use your security software to scan attachments of any kind rather than relying solely on the security measures employed by your email client. Only visit secure sites. If the site address begins with "http" rather than "https", that's a sign you need to leave it alone. Remember to update your security software at least once a week. Automatic updates are a good thing.
Don't forget to initiate a full system scan at least once a week, even if there are no apparent problems. Do so after making sure you've downloaded and installed the latest security updates.

Your pet's microchip

The point of a pet chip is to help you find your pet if it wanders away or is stolen. While the chip is not GPS-enabled, it's possible to scan the chip on an animal that ends up in an animal shelter or clinic and confirm a match. Unfortunately, that function is managed over a network. That means hackers can use it as a gateway. (Image courtesy of HowStuffWorks.com) Network security determines how vulnerable you are in terms of who can access the databases and come up with a match. Your best bet is to find out what security measures the chip manufacturer employs, including how often those measures are updated. If you don't get straight answers, go with a competitor's chip.

Your child's toy

Have you ever heard of a doll named Cayla? It's popular in Germany and also happens to be Wi-Fi enabled. That means hackers can gain access to the camera and microphone included in the doll design. Wherever the doll is carried, it's possible to gather data that can be used for all sorts of activities. That includes capturing information about passwords, PIN codes, and anything else that's in the range of the camera or the microphone. Internet-enabled toys need to be checked for spyware regularly. More and more manufacturers include detectors in their toy designs, but you may still need to initiate those scans and keep the software updated. This increases the odds that the toy remains a toy and doesn't become a spy for some hacker.

Infection from trading electronic files

It seems pretty harmless to accept a digital music file from a friend. In most cases, there is no harm. Unfortunately, your friend's digital music collection may already be corrupted. Once you load that corrupted file onto your hard drive, your computer becomes part of a botnet running behind your own home network. (Image courtesy of AlienVault.com) Whenever you receive a digital file, either via email or from someone stopping by with a jump drive, always use your security software to scan it before downloading it into your system. The software should be able to catch the infection. If you find anything, let your friend know so he or she can take steps to debug the original file. These are only a few examples of how your IoT can be hacked and lead to data theft or corruption. As with any type of internet-based infection, new strategies are developed daily.

How Blockchain might help

There's one major IoT design flaw that allows hackers to easily get into a system, and that is the centralized nature of the network's decision-making. There is a single point of control through which all requests are routed and decisions are made. A hacker has only to penetrate this singular authority to take control of everything, because individual devices can't decide on their own what constitutes a threat. Interestingly enough, the blockchain technology that underpins Bitcoin and many other cryptocurrencies might eventually provide a solution to the extremely hackable state of the IoT as presently configured. While not a perfect solution, the decentralized nature of blockchain has a lot of companies spending plenty on research and development for eventual deployment to a host of uses, including the IoT.
The advantage blockchain technology offers to IoT is that it removes the single point of control and allows each device on a network to work in conjunction with the others to detect and thwart hack attempts. Blockchain works through group consensus. This means that in order to take control of a system, a bad actor would have to take control of a majority of the devices all at once, which is an exponentially harder task than breaking through the single-point-of-control model. If a blockchain-powered IoT network detects an intrusion attempt and the group decides it is malicious, it can be quarantined so that no damage occurs to the network. That's the theory, anyway. Since blockchain is an almost brand-new technology, there are hurdles to be overcome before it can be deployed on a wide scale as a solution to IoT security problems. Here are a few: Computing power costs - It takes plenty of computing resources to run a blockchain, more than the average household owner is willing to pay right now. That's why the focus is on industrial IoT uses at present. Legal issues - If you have AI-powered devices making decisions on their own, who will bear ultimate responsibility when things go wrong? Volatility - The development around blockchain is young and unpredictable. Investing in a solution right now might mean having to buy all new equipment in a year.

Final Thoughts

One thing is certain. We have a huge problem (IoT security) and something that might eventually offer a solid solution (blockchain technology). Expect the path from here to there to be filled with potholes and dead ends, but stay tuned. The potential for a truly revolutionary technology to come into its own is definitely in the mix.

About Gary Stevens

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation as well as an active Github contributor. Defending your business from the next wave of cyberwar: IoT Threats. 5 DIY IoT projects you can build under $50 IoT botnets Mirai and Gafgyt target vulnerabilities in Apache Struts and SonicWall

Boston Dynamics adds military-grade motor (parkour) skills to its popular humanoid Atlas Robot

Natasha Mathur
12 Oct 2018
2 min read
Boston Dynamics, a robotics design company, has now added parkour skills to its popular and advanced humanoid robot, named Atlas. Parkour is a training discipline whose movement vocabulary developed out of military obstacle course training. The company posted a video on YouTube yesterday that shows Atlas jumping over a log, then climbing and leaping up staggered tall boxes, mimicking a parkour runner on a military course. "The control software (in Atlas) uses the whole body including legs, arms, and torso, to marshal the energy and strength for jumping over the log and leaping up the steps without breaking its pace. (Step height 40 cm.) Atlas uses computer vision to locate itself with respect to visible markers on the approach to hit the terrain accurately", mentioned Boston Dynamics in yesterday's video.

Atlas Parkour

The original version of Atlas was made public back in 2013 and was created for the United States Defense Advanced Research Projects Agency (DARPA). It quickly became famous for its advanced control system, which coordinates the motion of its arms, torso, and legs to achieve whole-body mobile manipulation. Boston Dynamics then unveiled the next generation of its Atlas robot back in 2016. This next-gen, electrically powered and hydraulically actuated Atlas was capable of walking on snow, picking up boxes, and getting up by itself after a fall. It was designed mainly to operate outdoors and inside buildings. The next-generation Atlas has sensors embedded in its body and legs to help it balance, along with LIDAR and stereo sensors in its head. These help it avoid obstacles, assess the terrain, and navigate. Boston Dynamics has a variety of other robots, such as Handle, SpotMini, Spot, LS3, WildCat, BigDog, SandFlea, and RHex. These robots are capable of performing actions that range from doing backflips, opening (and holding) doors, and washing the dishes to trail running and lifting boxes. For more information, check out the official Boston Dynamics website. Boston Dynamics' 'Android of robots' vision starts with launching 1000 robot dogs in 2019 Meet CIMON, the first AI robot to join the astronauts aboard ISS What we learned at the ICRA 2018 conference for robotics & automation

Apple joins the Thread Group, signaling its Smart Home ambitions with HomeKit, Siri and other IoT products

Bhagyashree R
09 Aug 2018
3 min read
Apple is now a part of the Thread Group's list of members, alongside its top rivals - Nest (a subsidiary of Google) and Amazon. This indicates some advancements in its HomeKit software framework and in inter-device communication between iOS devices.

Who is the Thread Group?

The Thread Group is a non-profit company that developed the network protocol Thread, with the aim of being the best way to connect and control IoT products. These are the features that enable it to do so: Mesh networking: It uses a mesh network design, connecting hundreds of products securely and reliably, which also means no single point of failure. Secure: It provides security at the network and application layers. To ensure only authorized devices join the network, it uses product install codes. It uses AES encryption to close security holes that exist in other wireless protocols, along with a smartphone-era authentication scheme. Battery friendly: Based on the power-efficient IEEE 802.15.4 MAC/PHY, it ensures extremely low power consumption. Short messaging, a streamlined routing protocol, and the use of low-power wireless system-on-chips also make it battery friendly. Based on IPv6: It is interoperable by design, using proven, open standards and IPv6 technology with 6LoWPAN (short for IPv6 over Low-Power Wireless Personal Area Networks) as the foundation. 6LoWPAN is an IPv6-based low-power wireless personal area network comprising devices that conform to the IEEE 802.15.4-2003 standard. Scalable: It can scale up to 250+ devices in a single network supporting multiple hops.

What does this membership bring to Apple?

The company has not revealed its plans yet, but nothing is stopping us from imagining what it could possibly do with Thread. According to a redditor, the following are some potential uses of Thread by Apple: HomeKit by Apple uses WiFi and Bluetooth as its wireless protocols; WiFi is very power hungry and Bluetooth is short-ranged, and Thread's mesh network and power-efficient design could solve both problems. Apple only allows certain products to operate on battery, requiring others to be plugged into power constantly - HomeKit cameras, for instance. Critical to both Apple and extended-use home devices, Thread promises "extremely low power consumption." Apple could use Thread to expand the number of IoT smart home devices the HomePod is capable of connecting to. With the support of Thread, iOS devices could offer better inter-device Siri communications, more reliable continuity features, and secure geo-fencing. Joining the group could also mean that Apple becomes open to more hardware for HomeKit, and more cost-competitive in the smart home area. Apple releases iOS 12 beta 2 with screen time and battery usage updates among others macOS Mojave: Apple updates the Mac experience for 2018 Apple stocks soar just shy of $1 Trillion market cap as revenue hits $53.3 Billion in Q3 earnings 2018

How to secure your Raspberry Pi board [Tutorial]

Gebin George
13 Jul 2018
10 min read
In this Raspberry Pi tutorial, we will learn to secure our Raspberry Pi board. We will also learn to implement and enable the security features that make the Pi secure. This article is an excerpt from the book Internet of Things with Raspberry Pi 3, written by Maneesh Rao.

Changing the default password

Every Raspberry Pi that is running the Raspbian operating system has the default username pi and default password raspberry, which should be changed as soon as we boot up the Pi for the first time. If our Raspberry Pi is exposed to the internet and the default username and password have not been changed, then it becomes an easy target for hackers. To change the password of the Pi in case you are using the GUI for logging in, open the menu and go to Preferences and Raspberry Pi Configuration, as shown in Figure 10.1: Within Raspberry Pi Configuration, under the System tab, select the Change Password option, which will prompt you to provide a new password. After that, click on OK and the password is changed (refer to Figure 10.2): If you are logging in through PuTTY using SSH, then open the configuration settings by running the sudo raspi-config command, as shown in Figure 10.3: On successful execution of the command, the configuration window opens up. Then, select the second option to change the password and finish, as shown in Figure 10.4: It will prompt you to provide a new password; you just need to provide it and exit. Then, the new password is set. Refer to Figure 10.5:

Changing the username

All Raspberry Pis come with the default username pi, which should be changed to make the Pi more secure. We create a new user, assign it all the necessary rights, and then delete the pi user. To add a new user, run the sudo adduser adminuser command in the terminal. It will prompt for a password; provide it, and you are done, as shown in Figure 10.6: Now, we will add our newly created user to the sudo group so that it has all the root-level permissions, as shown in Figure 10.7: Now, we can delete the default user, pi, by running the sudo deluser pi command. This will delete the user, but its home folder /home/pi will still be there. If required, you can delete that as well.
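Collected in one place, the sequence looks like this - a sketch assuming the new admin account is called adminuser (pick any name you like), and that you log out and back in as adminuser before deleting pi, since you cannot delete the account you are currently logged in with:

$ sudo adduser adminuser
$ sudo usermod -aG sudo adminuser
$ sudo deluser pi
$ sudo rm -rf /home/pi

The usermod -aG sudo step is what grants the new account root-level rights via the sudo group; alternatively, deluser --remove-home pi deletes the user and its home folder in one step.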
Making sudo require a password

When a command is run with sudo as the prefix, it executes with superuser privileges. By default, running a command with sudo doesn't need a password, but this can cost dearly if a hacker gets access to the Raspberry Pi and takes control of everything. To make sure that a password is required every time a command is run with superuser privileges, edit the 010_pi-nopasswd file under /etc/sudoers.d/ by executing the command shown in Figure 10.8: This command will open up the file in the nano editor; replace the content with pi ALL=(ALL) PASSWD: ALL, and save it.

Updating the Raspbian operating system

To get the latest security updates, it is important to ensure that the Raspbian OS is updated to the latest version whenever one is available. Visit https://www.raspberrypi.org/documentation/raspbian/updating.md to learn the steps to update Raspbian.

Improving SSH security

SSH is one of the most common techniques used to access the Raspberry Pi over the network, which makes securing it essential.

Username and password security

Apart from having a strong password, we can allow and deny access to specific users. This can be done by making changes in the sshd_config file. Run the sudo nano /etc/ssh/sshd_config command. This will open up the sshd_config file; then, add the following line(s) at the end to allow or deny specific users: To allow users, add the line: AllowUsers tom john merry To deny users, add this line: DenyUsers peter methew For these changes to take effect, it is necessary to reboot the Raspberry Pi.

Key-based authentication

Using a public-private key pair for authenticating a client to an SSH server (the Raspberry Pi), we can secure our Raspberry Pi from hackers. To enable key-based authentication, we first need to generate a public-private key pair using tools called PuTTYgen for Windows and ssh-keygen for Linux. Note that the key pair should be generated by the client and not by the Raspberry Pi. For our purpose, we will use PuTTYgen to generate the key pair. Download PuTTY from the following web link: Note that PuTTYgen comes with PuTTY, so you need not install it separately. Open the PuTTYgen client and click on Generate, as shown in Figure 10.9: Next, we need to hover the mouse over the blank area to generate the key, as highlighted in Figure 10.10: Once the key generation process is complete, there will be an option to save the public and private keys separately for later use, as shown in Figure 10.11 - ensure you keep your private key safe and secure: Let's name the public key file rpi_pubkey and the private key file rpi_privkey.ppk, and transfer the public key file rpi_pubkey from our system to the Raspberry Pi. Log in to the Raspberry Pi and, under the user's home directory, which is /home/pi in our case, create a special directory with the name .ssh, as shown in Figure 10.12: Now, move into the .ssh directory using the cd command and create/open the file with the name authorized_keys, as shown in Figure 10.13: The nano command opens up the authorized_keys file, into which we will copy the content of our public key file, rpi_pubkey. Then, save (Ctrl + O) and close the file (Ctrl + X). Now, provide the required permissions for your pi user to access the files and folders. Run the following commands to set the permissions:

chmod 700 ~/.ssh/ (set permission for the .ssh directory)
chmod 600 ~/.ssh/authorized_keys (set permission for the key file)

Refer to Figure 10.14, which shows the permissions before and after running the chmod commands: Finally, we need to disable password logins to avoid unauthorized access by editing the /etc/ssh/sshd_config file. Open the file in the nano editor by running the following command:

sudo nano /etc/ssh/sshd_config

In the file, there is a parameter #PasswordAuthentication yes. We need to uncomment the line by removing # and setting the value to no: PasswordAuthentication no Save (Ctrl + O) and close the file (Ctrl + X). Now, password login is prohibited and we can access the Raspberry Pi using the key file only. Restart the Raspberry Pi to make sure all the changes come into effect with the following command:

sudo reboot

Here, we are assuming that the system used to generate the key pair and the system used to log in to the Pi are one and the same. Now, you can log in to the Raspberry Pi using PuTTY. Open the PuTTY terminal and provide the IP address of your Pi. On the left-hand side of the PuTTY window, under Category, expand SSH as shown in Figure 10.15: Then, select Auth, which will provide the option to browse and upload the private key file, as shown in Figure 10.16: Once the private key file is uploaded, click on Open and it will log in to the Raspberry Pi successfully without any password.
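If your client machine runs Linux or macOS rather than Windows, the same key-based setup can be done with ssh-keygen instead of PuTTYgen. The following is a minimal sketch; the key file name and the Pi's address (192.168.1.10 here) are placeholders, so substitute your own:

$ ssh-keygen -t rsa -b 2048 -f ~/.ssh/rpi_key
$ ssh-copy-id -i ~/.ssh/rpi_key.pub pi@192.168.1.10
$ ssh -i ~/.ssh/rpi_key pi@192.168.1.10

ssh-copy-id appends the public key to ~/.ssh/authorized_keys on the Pi and sets the permissions for you. Run it before you disable PasswordAuthentication, or you will lock yourself out.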
Setting up a firewall

There are many firewall solutions available for Linux/Unix-based operating systems, such as the Raspbian OS in the case of the Raspberry Pi. These firewall solutions use iptables underneath to filter packets coming from different sources and allow only the legitimate ones to enter the system. iptables is installed on the Raspberry Pi by default but is not set up, and configuring it directly is a bit tedious. So, we will use an alternative tool, Uncomplicated Firewall (UFW), which is extremely easy to set up and use. To install ufw, run the following command (refer to Figure 10.17):

sudo apt install ufw

Once the download is complete, enable ufw (refer to Figure 10.18) with the following command:

sudo ufw enable

If you want to disable the firewall (refer to Figure 10.20), use the following command:

sudo ufw disable

Now, let's see some features of ufw that we can use to improve the safety of the Raspberry Pi. Allow traffic only on a particular port using the allow command, as shown in Figure 10.21: Restrict access on a port using the deny command, as shown in Figure 10.22: We can also allow and restrict access for a specific service on a specific port. Here, we will allow tcp traffic on port 21 (refer to Figure 10.23): We can check the status of all the rules under the firewall using the status command, as shown in Figure 10.24: We can also restrict access for particular IP addresses on a particular port. Here, we deny access to port 30 from the IP address 192.168.2.1, as shown in Figure 10.25: To learn more about ufw, visit https://www.linux.com/learn/introduction-uncomplicated-firewall-ufw.

Fail2Ban

At times, we use our Raspberry Pi as a server, which interacts with other devices that act as clients for the Raspberry Pi. In such scenarios, we need to open certain ports and allow certain IP addresses to access them. These access points can become entry points for hackers to get hold of the Raspberry Pi and do damage. To protect ourselves from this threat, we can use the fail2ban tool. This tool monitors the logs of Raspberry Pi traffic, keeps a check on brute-force attempts and DDoS attacks, and informs the installed firewall to block requests from the offending IP address. To install Fail2Ban, run the following command:

sudo apt install fail2ban

Once the download is completed successfully, a folder with the name fail2ban is created at the path /etc. Under this folder, there is a file named jail.conf. Copy the content of this file to a new file named jail.local; this will enable fail2ban on the Raspberry Pi. To copy, you can use the following command:

sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Now, edit the file using the nano editor:

sudo nano /etc/fail2ban/jail.local

Look for the [ssh] section. It has a default configuration, as shown in Figure 10.26: This shows that Fail2Ban is enabled for ssh. It checks the port for ssh connections, filters the traffic as per the conditions set in the sshd filter file located at /etc/fail2ban/filter.d/sshd.conf, parses the logs at /var/log/auth.log for any suspicious activity, and allows only six retries for login, after which it restricts that particular IP address. The default action taken by fail2ban in case someone tries to hack is defined in jail.local, as shown in Figure 10.27: This means that when the iptables-multiport action is taken against any malicious activity, it runs as per the configuration in /etc/fail2ban/action.d/iptables-multiport.conf.
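Since the book shows the [ssh] section only as a screenshot, here is an illustrative sketch of what a slightly hardened version might look like in jail.local - the values are examples to tune, not recommendations:

[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 4
bantime  = 600

Lowering maxretry from the default six and banning the offending address for 600 seconds slows brute-force attempts considerably, at the cost of occasionally locking yourself out after a few mistyped passwords.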
To summarize, we learned how to secure our Raspberry Pi single-board computer. If you found this post useful, do check out the book Internet of Things with Raspberry Pi 3, to interface various sensors and actuators with Raspberry Pi 3 and send data to the cloud. Build an Actuator app for controlling Illumination with Raspberry Pi 3 Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project? Build your first Raspberry Pi project

AMD’s $293 million JV with Chinese chipmaker Hygon starts production of x86 CPUs

Natasha Mathur
10 Jul 2018
3 min read
Chinese chip producer Hygon has begun production of China-made x86 processors named "Dhyana". These processors use AMD's Zen microarchitecture and are the result of the licensing deal between AMD and its Chinese partners, and Hygon has started shipping the new "Dhyana" x86 CPUs. According to official statements made by AMD, it does not permit the selling of final chip designs to its partners in China; instead, it encourages its partners to design their own processors that suit the needs of the Chinese server market. This is an effort to break China's dependency on the foreign technology market. In 2016, AMD announced that it was working on a joint project in China to develop processors. This provided AMD with $293 million in cash from Tianjin Haiguang Advanced Technology Investment Co. (THATIC), a joint venture that includes AMD as well as the Chinese Academy of Sciences. What's interesting is that AMD managed to establish a license allowing Chinese processor manufacturers to develop and sell x86 processors, despite the fact that Intel was blocked from selling Xeon processors to China in 2015 by the Obama administration over concerns that the chips would help China's nuclear weapons programs. The Dhyana processors currently focus on embedded applications. They are system-on-chip (SoC) designs rather than socketed chips, but this doesn't prevent them from being used in high-performance or data center applications, which usually leverage Intel Xeon and other server processors. Also, Linux kernel developers have stated that the processors are very close in design to AMD's EPYC. In fact, moving the Linux kernel code for EPYC processors to the Hygon chips required fewer than 200 new lines of code, according to a report from Michael Larabel of Phoronix. The only differences between the two are the vendor IDs and family series. Apart from the AMD joint venture, there are other chip-producing ventures that China is heavily invested in. One such venture is Zhaoxin Semiconductor, which is working to manufacture x86 chips through a partnership with VIA. China is making continuous efforts to free itself from US interventions and to reshape its long-term processor market. Indications are that most of the x86 processors are 14nm chips, but there isn't much information available on the capabilities of the Dhyana family, and other details of their manufacturing characteristics are still to be revealed. Baidu releases Kunlun AI chip, China's first cloud-to-edge AI chip Qualcomm announces a new chipset for standalone AR/VR headsets at Augmented World Expo

Build an Actuator app for controlling Illumination with Raspberry Pi 3

Gebin George
28 Jun 2018
10 min read
In this article, we will look at how to build an actuator application for controlling illumination. This article is an excerpt from the book Mastering Internet of Things, written by Peter Waher. This book will help you design and implement IoT solutions with single-board computers.

Preparing our project

Let's create a new Universal Windows Platform application project. This time, we'll call it Actuator. We can use the same Raspberry Pi 3 setup as before, even though we will only use the relay in this project. To make the persistence of application states even easier, we'll also include the latest version of the NuGet package Waher.Runtime.Settings in the project. It uses the underlying object database defined by Waher.Persistence to persist application settings.

Defining control parameters

Actuators come in all sorts, types, and sizes, from the very complex to the very simple. While it would be possible to create a proprietary format that configures the actuator in a bulk operation, such a method is doomed to fail if you aim for any type of interoperable communication. Since the internet is based on interoperability as a core principle, we should consider this from the start, during the design phase. Interoperability means devices can learn to operate together, even if they are from different manufacturers. To achieve this, devices must be able to describe what they can do, in a way that each participant understands. To be able to do this, we need a way to break down (divide and conquer) a complex actuator into parts that are easily described and understood. One way is to see an actuator as a collection of control parameters. Each control parameter is a named parameter with a simple and recognizable data type. (In the same way, we can see a sensor as a collection of sensor data fields.) For our example, we will only need one control parameter: a Boolean control parameter governing the state of our relay. We'll just call it Output, for simplicity.

Understanding relays

Relays, simply put, are electric switches that we can control using a small output signal. They're perfect for small controllers, like the Raspberry Pi, to switch other circuits with higher voltages on and off. The simplest example is to use a relay to switch a lamp on and off. We can't light the lamp using the voltage available to us in the Raspberry Pi, but we can use a relay as a switch to control the lamp. The principal part of a normal relay is a coil. When electricity runs through it, it magnetizes an iron core, which in turn moves a lever from the Normally Closed (NC) connector to the Normally Open (NO) connector. When electricity is cut, a spring returns the lever from the NO connector to the NC connector. This movement of the lever from one connector to the other causes a characteristic clicking sound, which tells you that the relay works. The lever in turn is connected to the Common (COM) connector. The following figure illustrates how a simple relay is constructed. We control the flow of the current through the coil (L1) using our output SIGNAL (D1 in our case). Internally, in the relay, a resistor (R1) is placed before the base pin of the transistor (T1), to adapt the signal voltage to an appropriate level. When we connect or cut the current through the coil, it will induce a reverse current, which may be harmful to the transistor when the current is being cut.
For that reason, a fly-back diode (D1) is added, allowing excess current to be fed back, avoiding harm to the transistor:

Connecting our lamp

Now that we know how a relay works, it's relatively easy to connect our lamp to it. Since we want the lamp to be illuminated when we turn the relay on (set D1 to HIGH), we will use the NO and COM connectors, and let the NC connector be. If the lamp has a normal two-wire AC cable, we can insert the relay into the AC circuit by simply cutting one of the wires, inserting one end into the NO connector and the other into the COM connector, as is illustrated in the following figure: Be sure to follow appropriate safety regulations when working with electricity.

Connecting an LED

An alternative to working with alternating current (AC) is to use a low-power direct current (DC) source and an LED to simulate a lamp. You can connect the COM connector to a resistor and an LED, and then to ground (GND) on one end, and the NO connector directly to the 5V or 3.3V source on the Raspberry Pi on the other end. The size of the resistor is determined by how much current the LED needs to light up and the voltage source you choose. If the LED needs 20 mA and you connect it to a 5V source, Ohm's Law tells us we need an R = U/I = 5V/0.02A = 250 Ω resistor. The following figure illustrates this:

Controlling the output

The relay is connected to our digital output pin 9 on the Arduino board. As such, controlling it is a simple call to the digitalWrite() method on our arduino object. Since we will need to perform this control action from various locations in code in later chapters, we'll create a method for it:

internal async Task SetOutput(bool On, string Actor)
{
    if (this.arduino != null)
    {
        this.arduino.digitalWrite(9, On ? PinState.HIGH : PinState.LOW);

The first parameter simply states the new value of the control parameter. We'll add a second parameter that describes who is making the requested change. This will come in handy later, when we allow online users to change control parameters.

Persisting control parameter states

If the device reboots for some reason, for instance after a power outage, it's normally desirable that it returns to the state it was in before it shut down. For this, we need to persist the output value. We can use the object database defined in Waher.Persistence and Waher.Persistence.Files for this. But for simple control states, we don't need to create our own data-bearing classes; that has already been done by Waher.Runtime.Settings. To use it, we first include the NuGet, as described earlier. We must also include its assembly when we initialize the runtime inventory, which is used by the object database:

Types.Initialize(
    typeof(FilesProvider).GetTypeInfo().Assembly,
    typeof(App).GetTypeInfo().Assembly,
    typeof(RuntimeSettings).GetTypeInfo().Assembly);

Depending on the build version selected when creating your UWP application, different versions of .NET Standard will be supported. Build 10586, for instance, only supports .NET Standard up to v1.4; Build 16299, however, supports .NET Standard up to v2.0. The Waher.Runtime.Inventory.Loader library, available as a NuGet package, provides the capability to loop through existing assemblies in a simple manner, but it requires support for .NET Standard 1.5. You can call its TypesLoader.Initialize() method to initialize Waher.Runtime.Inventory with all assemblies available in the runtime. It also dynamically loads all permitted assemblies available in the application folder that have not been loaded.
Saving the current control state is then simply a matter of calling the Set() or SetAsync() methods on the static RuntimeSettings class, defined in the Waher.Runtime.Settings namespace:

await RuntimeSettings.SetAsync("Actuator.Output", On);

During the initialization of the device, we then call the Get() or GetAsync() methods to get the last value, if it exists. If it does not exist, a default value we define is returned:

bool LastOn = await RuntimeSettings.GetAsync("Actuator.Output", false);
this.arduino.digitalWrite(9, LastOn ? PinState.HIGH : PinState.LOW);

Logging important control events

In distributed IoT control applications, it's vitally important to make sure unauthorized access to the system is avoided. While we will dive deeper into this subject in later chapters, one important tool we can start using now is to log everything of security interest in the event log. We can decide what to do with the event log later, whether we want to analyze or store it locally or distribute it in the network for analysis somewhere else. But unless we start logging events of security interest while we develop, we risk forgetting to log certain events later. So, let's log an event every time the output is set:

Log.Informational("Setting Control Parameter.", string.Empty, Actor ?? "Windows user",
    new KeyValuePair<string, object>("Output", On));

If the Actor parameter is null, we assume the control parameter has been set from the Windows GUI. We use this fact to update the window if the change has been requested from somewhere else:

if (Actor != null)
    await MainPage.Instance.OutputSet(On);
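Pulling the fragments in this section together, the complete method comes out something like the following sketch. It is assembled from the snippets above rather than copied from the book's repository, so treat the exact ordering of the steps as illustrative:

internal async Task SetOutput(bool On, string Actor)
{
    if (this.arduino != null)
    {
        this.arduino.digitalWrite(9, On ? PinState.HIGH : PinState.LOW);

        // Persist the state so the device can restore it after a reboot.
        await RuntimeSettings.SetAsync("Actuator.Output", On);

        // Log every control action, together with who requested it.
        Log.Informational("Setting Control Parameter.", string.Empty,
            Actor ?? "Windows user",
            new KeyValuePair<string, object>("Output", On));

        // A null Actor means the change came from the local GUI;
        // otherwise, update the window to reflect the remote change.
        if (Actor != null)
            await MainPage.Instance.OutputSet(On);
    }
}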
} else Log.Error("Output mode not supported for GPIO pin " + gpioOutputPin.ToString()); There are various output modes available: Output, OutputOpenDrain, OutputOpenDrainPullUp, OutputOpenSource, and OutputOpenSourcePullDown. The code documentation for each flag describes the particulars of each option. Setting the GPIO pin output To set the actual output value, we call the Write() method on the pin object: bool LastOn = await RuntimeSettings.GetAsync("Actuator.Output", false); this.gpioPin.Write(LastOn ? GpioPinValue.High : GpioPinValue.Low); We need to make a similar change in the SetOutput() method. The Actuator project in the MIOT repository uses the Arduino use case by default. The GPIO code is also available through conditional compiling. It is activated by uncommenting the GPIO switch definition on the first row of the App.xaml.cs file. You can also perform Digital Input using principles similar to the preceding ones, with some differences. First, you select an input drive mode: Input, InputPullUp or InputPullDown. You then use the Read() method to read the current state of the pin. You can also use the ValueChanged event to get a notification whenever the input pin changes value. We saw how to create a simple actuator app for the Raspberry Pi using C#. If you found our post useful, do check out this title Mastering Internet of Things, to build complex projects using motions detectors, controllers, sensors, and Raspberry Pi 3. Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project? Build your first Raspberry Pi project Meet the Coolest Raspberry Pi Family Member: Raspberry Pi Zero W Wireless

Build an IoT application with Google Cloud [Tutorial]

Gebin George
27 Jun 2018
19 min read
In this tutorial, we will build a sample Internet of Things application using Google Cloud IoT. We will start off by implementing the end-to-end solution, where we take the data from the DHT11 sensor and post it to the Google IoT Core state topic. This article is an excerpt from the book Enterprise Internet of Things Handbook, written by Arvind Ravulavaru.

End-to-end communication

To get started with Google IoT Core, we need to have a Google account. If you do not have a Google account, you can create one by navigating to this URL: https://accounts.google.com/SignUp?hl=en. Once you have created your account, you can log in and navigate to Google Cloud Console: https://console.cloud.google.com.

Setting up a project

The first thing we are going to do is create a project. If you have already worked with Google Cloud Platform and have at least one project, you will be taken to the first project in the list or to the Getting started page. As of the time of writing this book, Google Cloud Platform offers a 12-month free trial with $300 in credit; if the offer is still available when you are reading this chapter, I would highly recommend signing up. Once you have signed up, let's get started by creating a new project. From the top menu bar, select the Select a Project dropdown and click on the plus icon to create a new project. You can fill in the details as illustrated in the following screenshot: Click on the Create button. Once the project is created, navigate to the project and you should land on the Home page.

Enabling APIs

Following are the steps to be followed for enabling APIs: From the menu on the left-hand side, select APIs & Services | Library, as shown in the following screenshot: On the following screen, search for pubsub and select the Pub/Sub API from the results, and we should land on a page similar to the following: Click on the ENABLE button and we should now be able to use these APIs in our project. Next, we need to enable the real-time API; search for realtime and we should find something similar to the following: Click on the ENABLE button.

Enabling device registry and devices

The following steps should be used for enabling the device registry and devices: From the left-hand side menu, select IoT Core and we should land on the IoT Core home page: Instead of the previous screen, if you see a screen to enable APIs, please enable the required APIs from here. Click on the Create device registry button. On the Create device registry screen, fill in the details as follows:

Registry ID: Pi3-DHT11-Nodes
Cloud region: us-central1
Protocol: MQTT, HTTP
Default telemetry topic: device-events
Default state topic: dht11

After completing all the details, our form should look like the following: We will add the required certificates later on. Click on the Create button and a new device registry will be created. From the Pi3-DHT11-Nodes registry page, click on the Add device button and set the Device ID as Pi3-DHT11-Node or any other suitable name. Leave everything else as the defaults, make sure Device communication is set to Allowed, and create the new device. On the device page, we should see a warning as highlighted in the following screenshot: Now, we are going to add a new public key. To generate a public/private key pair, we need to have the OpenSSL command line available. You can download and set up OpenSSL from here: https://www.openssl.org/source/.
Use the following command to generate a certificate pair at the default location on your machine:

openssl req -x509 -newkey rsa:2048 -keyout rsa_private.pem -nodes -out rsa_cert.pem -subj "/CN=unused"

If everything goes well, you should see an output as shown here: Do not share these certificates anywhere; anyone with these certificates can connect to Google IoT Core as a device and start publishing data. Now, once the certificates are created, we will attach them to the device we have created in IoT Core. Head back to the device page of the Google IoT Core service and, under Authentication, click on Add public key. On the following screen, fill it in as illustrated: The public key value is the contents of rsa_cert.pem that we generated earlier. Click on the ADD button. Now that the public key has been successfully added, we can connect to the cloud using the private key.

Setting up Raspberry Pi 3 with the DHT11 node

Now that we have our device set up in Google IoT Core, we are going to complete the remaining operations on Raspberry Pi 3 to send data.

Prerequisites

The requirements for setting up Raspberry Pi 3 with a DHT11 node are: One Raspberry Pi 3: https://www.amazon.com/Raspberry-Pi-Desktop-Starter-White/dp/B01CI58722 One breadboard: https://www.amazon.com/Solderless-Breadboard-Circuit-Circboard-Prototyping/dp/B01DDI54II/ One DHT11 sensor: https://www.amazon.com/HiLetgo-Temperature-Humidity-Arduino-Raspberry/dp/B01DKC2GQ0 Three male-to-female jumper cables: https://www.amazon.com/RGBZONE-120pcs-Multicolored-Dupont-Breadboard/dp/B01M1IEUAF/ If you are new to the world of Raspberry Pi GPIO interfacing, take a look at this Raspberry Pi GPIO Tutorial: The Basics Explained on YouTube: https://www.youtube.com/watch?v=6PuK9fh3aL8. The following steps are to be used for the setup process: Connect the DHT11 sensor to Raspberry Pi 3 as shown in the following diagram: Next, power up Raspberry Pi 3 and log in to it. On the desktop, create a new folder named Google-IoT-Device. Open a new Terminal and cd into this folder.

Setting up Node.js

Refer to the following steps to install Node.js: Open a new Terminal and run the following commands:

$ sudo apt update
$ sudo apt full-upgrade

This will upgrade all the packages that need upgrades. Next, we will install the latest version of Node.js. We will be using the Node 7.x version:

$ curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
$ sudo apt install nodejs

This will take a moment to install, and once your installation is done, you should be able to run the following commands to see the versions of Node.js and npm:

$ node -v
$ npm -v

Developing the Node.js device app

Now, we will set up the app and write the required code: From the Terminal, once you are inside the Google-IoT-Device folder, run the following command:

$ npm init -y

Next, we will install jsonwebtoken (https://www.npmjs.com/package/jsonwebtoken) and mqtt (https://www.npmjs.com/package/mqtt) from npm. Execute the following command:

$ npm install jsonwebtoken mqtt --save

Next, we will install rpi-dht-sensor (https://www.npmjs.com/package/rpi-dht-sensor) from npm.
This module will help in reading the DHT11 temperature and humidity values:

$ npm install rpi-dht-sensor --save

Your final package.json file should look similar to the following code snippet:

{
  "name": "Google-IoT-Device",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "jsonwebtoken": "^8.1.1",
    "mqtt": "^2.15.3",
    "rpi-dht-sensor": "^0.1.1"
  }
}

Now that we have the required dependencies installed, let's continue. Create a new file named index.js at the root of the Google-IoT-Device folder. Next, create a folder named certs at the root of the Google-IoT-Device folder and move the two certificates we created using OpenSSL there. Your final folder structure should look something like this: Open index.js in any text editor and update it as shown here:

var fs = require('fs');
var jwt = require('jsonwebtoken');
var mqtt = require('mqtt');
var rpiDhtSensor = require('rpi-dht-sensor');

var dht = new rpiDhtSensor.DHT11(2); // `2` => GPIO2

var projectId = 'pi-iot-project';
var cloudRegion = 'us-central1';
var registryId = 'Pi3-DHT11-Nodes';
var deviceId = 'Pi3-DHT11-Node';
var mqttHost = 'mqtt.googleapis.com';
var mqttPort = 8883;
var privateKeyFile = './certs/rsa_private.pem';
var algorithm = 'RS256';
var messageType = 'state'; // or event

var mqttClientId = 'projects/' + projectId + '/locations/' + cloudRegion + '/registries/' + registryId + '/devices/' + deviceId;
var mqttTopic = '/devices/' + deviceId + '/' + messageType;

var connectionArgs = {
    host: mqttHost,
    port: mqttPort,
    clientId: mqttClientId,
    username: 'unused',
    password: createJwt(projectId, privateKeyFile, algorithm),
    protocol: 'mqtts',
    secureProtocol: 'TLSv1_2_method'
};

console.log('connecting...');
var client = mqtt.connect(connectionArgs);

// Subscribe to the /devices/{device-id}/config topic to receive config updates.
client.subscribe('/devices/' + deviceId + '/config');

client.on('connect', function(success) {
    if (success) {
        console.log('Client connected...');
        sendData();
    } else {
        console.log('Client not connected...');
    }
});

client.on('close', function() {
    console.log('close');
});

client.on('error', function(err) {
    console.log('error', err);
});

client.on('message', function(topic, message, packet) {
    console.log(topic, 'message received: ', Buffer.from(message, 'base64').toString('ascii'));
});

function createJwt(projectId, privateKeyFile, algorithm) {
    var token = {
        'iat': parseInt(Date.now() / 1000),
        'exp': parseInt(Date.now() / 1000) + 86400, // 1 day
        'aud': projectId
    };
    var privateKey = fs.readFileSync(privateKeyFile);
    return jwt.sign(token, privateKey, { algorithm: algorithm });
}

function fetchData() {
    var readout = dht.read();
    var temp = readout.temperature.toFixed(2);
    var humd = readout.humidity.toFixed(2);
    return {
        'temp': temp,
        'humd': humd,
        'time': new Date().toISOString().slice(0, 19).replace('T', ' ') // https://stackoverflow.com/a/11150727/1015046
    };
}

function sendData() {
    var payload = fetchData();
    payload = JSON.stringify(payload);
    console.log(mqttTopic, ': Publishing message:', payload);
    client.publish(mqttTopic, payload, { qos: 1 });
    console.log('Transmitting in 30 seconds');
    setTimeout(sendData, 30000);
}

In the previous code, we first define the projectId, cloudRegion, registryId, and deviceId based on what we have created. Next, we build the connectionArgs object, using which we are going to connect to Google IoT Core using MQTT over TLS.
Save the file and run the following command:

$ sudo node index.js

We should see something like this: as the Terminal logs show, the device first gets connected and then starts transmitting the temperature and humidity along with the time. We send the time as well so that we can save it in the BigQuery table and then build a time series chart quite easily.

Now, if we head back to the Device page of Google IoT Core and navigate to the Configuration & state history tab, we should see the data that we are sending to the state topic.

Now that the device is sending data, let's read that data from another client.

Reading the data from the device

For this, you can either use the same Raspberry Pi 3 or another computer. I am going to use a MacBook as a client that is interested in the data sent by the Thing.

Setting up credentials

Before we start reading data from Google IoT Core, we have to set up our computer (for example, a MacBook) as a trusted device, so our computer can request data. Let's perform the following steps to set up the credentials:

1. We need to create a new Service account key. From the left-hand-side menu of the Google Cloud Console, select APIs & Services | Credentials.
2. Click on the Create credentials dropdown and select Service account key, as shown in the following screenshot.
3. Fill in the details as shown in the following screenshot. Note that we have given this client access to the entire project, as an Owner; do not select these settings if this is a production application.
4. Click on Create and you will be asked to download and save the file. Do not share this file; it is as good as giving someone owner-level permissions to all assets of this project.
5. Once the file is downloaded somewhere safe, create an environment variable named GOOGLE_APPLICATION_CREDENTIALS and point it to the path of the downloaded file. You can refer to Getting Started with Authentication at https://cloud.google.com/docs/authentication/getting-started if you face any difficulties.

Setting up subscriptions

The data from the device is being sent to Google IoT Core on the state topic which, if you recall, we named dht11. Now, we are going to create a subscription for this topic:

1. From the menu on the left side, select Pub/Sub | Topics.
2. Click on New subscription for the dht11 topic, as shown in the following screenshot.
3. Create a new subscription by setting up the options selected in this screenshot.

We are going to use the subscription named dht11-data to get the data from the state topic.
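If you would rather script this step than click through the console, the same subscription can be created programmatically with the @google-cloud/pubsub module that we install for the client below. This is only a sketch against the pre-1.0 API in use at the time of writing; the topic and subscription names are the ones we chose earlier:

// Sketch: create the dht11-data subscription on the dht11 state topic.
var PubSub = require('@google-cloud/pubsub');
var pubsub = new PubSub({ projectId: 'pi-iot-project' });

pubsub
  .topic('dht11')
  .createSubscription('dht11-data')
  .then(function(results) {
    console.log('Subscription created:', results[0].name);
  })
  .catch(function(err) {
    console.error('Failed to create subscription:', err);
  });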
Setting up the client

Now that we have provided the required credentials and subscribed to a Pub/Sub topic, we will set up the Pub/Sub client. Follow these steps:

1. Create a folder named test_client and cd into it from a Terminal.
2. Now, run the following command:

$ npm init -y

3. Next, install the @google-cloud/pubsub (https://www.npmjs.com/package/@google-cloud/pubsub) module with the help of the following command:

$ npm install @google-cloud/pubsub --save

4. Create a file inside the test_client folder named index.js and update it as shown in this code snippet:

var PubSub = require('@google-cloud/pubsub');

var projectId = 'pi-iot-project';
var stateSubscriber = 'dht11-data';

// Instantiates a client
var pubsub = new PubSub({
  projectId: projectId,
});

var subscription = pubsub.subscription('projects/' + projectId +
  '/subscriptions/' + stateSubscriber);

var messageHandler = function(message) {
  console.log('Message Begin >>>>>>>>');
  console.log('message.connectionId', message.connectionId);
  console.log('message.attributes', message.attributes);
  console.log('message.data', message.data.toString()); // message.data is a Buffer
  console.log('Message End >>>>>>>>>>');
  // "Ack" (acknowledge receipt of) the message
  message.ack();
};

// Listen for new messages
subscription.on('message', messageHandler);

5. Update the projectId and stateSubscriber in the previous code.
6. Now, save the file and run the following command:

$ node index.js

We should see the output in the console as each message arrives. This way, any client that is interested in the data of this device can use this approach to get the latest data.
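Note that every device in the registry reports state to the same dht11 topic, so a real-world client will usually need to filter by device. Google IoT Core attaches the device identity to each Pub/Sub message as attributes; the sketch below is a variation of the messageHandler above that only processes messages from our Pi3-DHT11-Node device (the attribute name is an assumption based on IoT Core's documented behavior at the time):

// Sketch: only handle messages from one device.
var messageHandler = function(message) {
  var attrs = message.attributes || {};
  if (attrs.deviceId !== 'Pi3-DHT11-Node') {
    message.ack(); // some other device; acknowledge and skip
    return;
  }
  var payload = JSON.parse(message.data.toString());
  console.log('Temp:', payload.temp, 'Humidity:', payload.humd, 'at', payload.time);
  message.ack();
};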
With this, we conclude the section on posting data to Google IoT Core and fetching it back. In the next section, we are going to work on building a dashboard.

Building a dashboard

Now that we have seen how a client can read the data from our device on demand, we will move on to building a dashboard where we display the data in real time. For this, we are going to use Google Cloud Functions, Google BigQuery, and Google Data Studio.

Google Cloud Functions

Cloud Functions is Google's serverless compute offering: a lightweight solution for creating standalone, single-purpose functions that respond to cloud events. You can read more about Google Cloud Functions at https://cloud.google.com/functions/.

Google BigQuery

Google BigQuery is an enterprise data warehouse that enables super-fast SQL queries using the processing power of Google's infrastructure. You can read more about Google BigQuery at https://cloud.google.com/bigquery/.

Google Data Studio

Google Data Studio helps to build dashboards and reports using various data connectors, such as BigQuery or Google Analytics. You can read more about Google Data Studio at https://cloud.google.com/data-studio/.

As of April 2018, these three services are still in beta.

As we have already seen in the Architecture section, once the data is published on the state topic, we are going to create a cloud function that is triggered by the data event on the Pub/Sub topic. Inside our cloud function, we get a copy of the published data and insert it into the BigQuery dataset. Once the data is inserted, we use Google Data Studio to create a new report by linking the BigQuery dataset as the input. So, let's get started.

Setting up BigQuery

The first thing we are going to do is set up BigQuery:

1. From the side menu of the Google Cloud Platform Console, on our project page, click on the BigQuery URL and we should be taken to the Google BigQuery home page.
2. Select Create new dataset, as shown in the following screenshot, and create a new dataset with the values illustrated there.
3. Once the dataset is created, click on the plus sign next to the dataset and create an empty table. We are going to name the table dht11_data and we are going to have three fields in it (temp, humd, and time, matching the payload sent by the device), as shown here.
4. Click on the Create Table button to create the table.

Now that we have our table ready, we will write a cloud function to insert the incoming data from Pub/Sub into this table.

Setting up Google Cloud Function

Now, we are going to set up a cloud function that will be triggered by the incoming data:

1. From the Google Cloud Console's left-hand-side menu, select Cloud Functions under Compute.
2. Once you land on the Google Cloud Functions homepage, you will be asked to enable the Cloud Functions API. Click on Enable API.
3. Once the API is enabled, we will be on the Create function page. Fill in the form as shown: the Trigger is set to Cloud Pub/Sub topic and we have selected dht11 as the Topic.
4. Under the Source code section, make sure you are in the index.js tab and update it as shown here:

var BigQuery = require('@google-cloud/bigquery');

var projectId = 'pi-iot-project';
var bigquery = new BigQuery({
  projectId: projectId,
});

var datasetName = 'pi3_dht11_dataset';
var tableName = 'dht11_data';

exports.pubsubToBQ = function(event, callback) {
  var msg = event.data;
  // The Pub/Sub payload arrives base64-encoded.
  var data = JSON.parse(Buffer.from(msg.data, 'base64').toString());
  // console.log(data);
  bigquery
    .dataset(datasetName)
    .table(tableName)
    .insert(data)
    .then(function() {
      console.log('Inserted rows');
      callback(); // task done
    })
    .catch(function(err) {
      if (err && err.name === 'PartialFailureError') {
        if (err.errors && err.errors.length > 0) {
          console.log('Insert errors:');
          err.errors.forEach(function(err) {
            console.error(err);
          });
        }
      } else {
        console.error('ERROR:', err);
      }
      callback(); // task done
    });
};

In the previous code, we use the BigQuery Node.js module to insert data into our BigQuery table. Update projectId, datasetName, and tableName as applicable in the code.

5. Next, click on the package.json tab and update it as shown:

{
  "name": "cloud_function",
  "version": "0.0.1",
  "dependencies": {
    "@google-cloud/bigquery": "^1.0.0"
  }
}

6. Finally, for the Function to execute field, enter pubsubToBQ. pubsubToBQ is the name of the function that holds our logic; it will be called whenever the data event occurs.
7. Click on the Create button and our function should be deployed in a minute.

Running the device

Now that the entire setup is done, we will start pumping data into BigQuery:

1. Head back to Raspberry Pi 3, which was sending the DHT11 temperature and humidity data, and run the application. We should see the data being published to the state topic.
2. Now, if we head back to the Cloud Functions page, we should see the requests coming into the cloud function. You can click on VIEW LOGS to view the logs of each function execution.
3. Now, head over to our table in BigQuery, click on the RUN QUERY button, and run the query as shown in the following screenshot.

Now, all the data that was generated by the DHT11 sensor is timestamped and stored in BigQuery. You can use the Save to Google Sheets button to save this data to Google Sheets and analyze it there or plot graphs, as shown here. Or we can go one step ahead and use Google Data Studio to do the same.
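If you prefer to run the same check from code rather than the BigQuery console, a small Node.js sketch using the same @google-cloud/bigquery module can pull the latest rows. The query text here is an assumption for illustration; adjust the project, dataset, and table names if yours differ:

// Sketch: read the ten most recent readings from the dht11_data table.
var BigQuery = require('@google-cloud/bigquery');
var bigquery = new BigQuery({ projectId: 'pi-iot-project' });

var query = 'SELECT time, temp, humd ' +
  'FROM `pi-iot-project.pi3_dht11_dataset.dht11_data` ' +
  'ORDER BY time DESC LIMIT 10';

bigquery
  .query({ query: query, useLegacySql: false })
  .then(function(results) {
    results[0].forEach(function(row) {
      console.log(row.time, row.temp, row.humd);
    });
  })
  .catch(console.error);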
Google Data Studio reports

Now that the data is ready in BigQuery, we are going to set up Google Data Studio and connect the two, so we can access the data from BigQuery in Google Data Studio:

1. Navigate to https://datastudio.google.com and log in with your Google account.
2. Once you are on the Home page of Google Data Studio, click on the Blank report template. Make sure you read and agree to the terms and conditions before proceeding.
3. Name the report PI3 DHT11 Sensor Data.
4. Click on Create new data source; we should land on a page where we need to create a new data source.
5. From the list of connectors, select BigQuery; you will be asked to authorize Data Studio to interface with BigQuery, as shown in the following screenshot.
6. Once we have authorized it, we will be shown our projects and the related datasets and tables.
7. Select the dht11_data table and click on Connect. This fetches the metadata of the table, as shown here.
8. Set the Aggregation for the temp and humd fields to Max, and set the Type for time to Date & Time. Pick Minute (mm) from the sub-list.
9. Click on Add to report and you will be asked to authorize Google Data Studio to read data from the table.
10. Once the data source has been successfully linked, we will create a new time series chart. From the menu, select Insert | Time Series.
11. Update the data configuration of the chart as shown in the following screenshot. You can adjust the styles as per your preference, and we should see something similar to the following screenshot.

This report can then be shared with any user.

With this, we have seen the basic features and implementation process needed to work with Google Cloud IoT Core, as well as other features of the platform.

If you found this post useful, do check out the book, Enterprise Internet of Things Handbook, to build state-of-the-art IoT applications best-fit for enterprises.