
Tech News - IoT and Hardware

119 Articles

Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV

Prasad Ramesh
19 Oct 2018
2 min read
Yesterday the Raspberry Pi Foundation launched a new device called the Raspberry Pi TV HAT. It is a small add-on board that, paired with a TV antenna, lets you decode and stream live TV. The TV HAT is roughly the size of a Raspberry Pi Zero board. It connects to the Raspberry Pi via the GPIO connector and has a port for a TV antenna connector. The new Raspberry Pi add-on follows a new form factor of HAT (Hardware Attached on Top): the add-on itself is a half-sized HAT matching the outline of Raspberry Pi Zero boards.

Source: Raspberry Pi website

TV HAT specifications and requirements

The add-on board has a Sony CXD2880 TV tuner. It supports TV standards like DVB-T2 (1.7MHz, 5MHz, 6MHz, 7MHz, 8MHz channel bandwidth) and DVB-T (5MHz, 6MHz, 7MHz, 8MHz channel bandwidth). The frequencies it can receive are VHF III, UHF IV, and UHF V. Raspbian Stretch (or later) is required for using the Raspberry Pi TV HAT. TVHeadend is the recommended software to start with TV streams. There is a ‘Getting Started’ guide on the Raspberry Pi website.

Watch on the Raspberry Pi

With the TV HAT you can receive and view television on a Raspberry Pi board. The Pi can also be used as a server to stream television over a network to other devices. When running as a server, the TV HAT works with all 40-pin GPIO Raspberry Pi boards. Watching TV on the Pi itself needs more processing, so the use of a Pi 2, 3, or 3B+ is recommended.

The TV HAT connected to a Raspberry Pi board. Source: Raspberry Pi website

Streaming over a network

Connecting a TV HAT to your network allows viewing streams on any device connected to the network. This includes computers, smartphones, and tablets. Initially, it will be available only in Europe. The Raspberry Pi TV HAT is now on sale for $21.50; visit the Raspberry Pi website for more details.

Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
How to secure your Raspberry Pi board [Tutorial]
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?


Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more.

Sugandha Lahoti
17 Oct 2018
5 min read
Microsoft CEO Satya Nadella published his letter to shareholders from the company’s 2018 annual report on LinkedIn yesterday. He talks about Microsoft’s accomplishments in the past year and the results and progress of Microsoft’s workplace, business applications, infrastructure, data, AI, and gaming efforts. He also describes the data and privacy rules adopted by Microsoft and its belief in “instilling trust in technology across everything they do.”

Microsoft’s results and progress

Data and AI

Azure Cosmos DB has already exceeded $100 million in annualized revenue. The company also saw rapid customer adoption of Azure Databricks for data preparation, advanced analytics, and machine learning scenarios. Its Azure Bot Service has nearly 300,000 developers, and the company is on the road to building the world’s first AI supercomputer in Azure. Microsoft also acquired GitHub in recognition of the increasingly vital role developers will play in value creation and growth across every industry.

Business Applications

Microsoft’s investments in Power BI have made it the leader in business analytics in the cloud. Its Open Data Initiative with Adobe and SAP will help customers take control of their data and build new experiences that truly put people at the center. HoloLens and mixed reality will be used for designing for first-line workers, who account for 80 percent of the world’s workforce. New solutions powered by LinkedIn and the Microsoft Graph help companies manage talent, training, and sales and marketing.

Applications and Infrastructure

Azure revenue grew 91 percent year-over-year, and the company is investing aggressively to build Azure as the world’s computer. It added nearly 500 new Azure capabilities in the past year, focused on both existing workloads and new workloads such as IoT and Edge AI. Microsoft expanded its global data center footprint to 54 regions and introduced Azure IoT, Azure Stack, and Azure Sphere.

Modern Workplace

More than 135 million people use Office 365 commercial every month. Outlook Mobile is used on 100 million iOS and Android devices worldwide. Microsoft Teams is being used by more than 300,000 organizations of all sizes, including 87 of the Fortune 100. Windows 10 is active on nearly 700 million devices around the world.

Gaming

The company surpassed $10 billion in gaming revenue this year. Xbox Live now has 57 million monthly active users, and Microsoft is investing in new services like Mixer and Game Pass. It also added five new gaming studios this year, including PlayFab, to build a cloud platform for the gaming industry across mobile, PC, and console.

Microsoft’s impact around the globe

Nadella highlighted that companies such as Coca-Cola, Chevron Corporation, and ZF Group, a car parts manufacturer in Germany, are using Microsoft’s technology to build their own digital capabilities. Walmart is also using Azure and Microsoft 365 to transform the shopping experience for customers. In Kenya, M-KOPA Solar, one of Microsoft’s partners, connected homes across sub-Saharan Africa to solar power using the Microsoft Cloud. Dynamics 365 was used in Arizona to improve outcomes among the state’s 15,000 children in foster care. MedApp is using HoloLens in Poland to help cardiologists visualize a patient's heart as it beats in real time. In Cambodia, underserved children in rural communities are learning to code with Minecraft.

How Microsoft is handling trust and responsibility

Microsoft’s motto is “instilling trust in technology across everything they do.” Nadella says, “We believe that privacy is a fundamental human right, which is why compliance is deeply embedded in all our processes and practices.” Microsoft has extended the data subject rights of GDPR to all its customers around the world, not just those in the European Union, and advocated for the passage of the CLOUD Act in the U.S. It also led the Cybersecurity Tech Accord, which has been signed by 61 global organizations, and is calling on governments to do more to make the internet safe. Microsoft announced the Defending Democracy Program to work with governments around the world to help safeguard voting, and introduced AccountGuard to offer advanced cybersecurity protections to political campaigns in the U.S.

The company is also investing in tools for detecting and addressing bias in AI systems and advocating for government regulation. It is addressing society's most pressing challenges with new programs like AI for Earth, a five-year, $50M commitment to environmental sustainability, and AI for Accessibility to benefit people with disabilities.

Nadella further adds, “Over the past year, we have made progress in building a diverse and inclusive culture where everyone can do their best work.” Microsoft has nearly doubled the number of women corporate vice presidents since FY16 and has increased African American/Black and Hispanic/Latino representation by 33 percent. He concludes, “I’m proud of our progress, and I’m proud of the more than 100,000 Microsoft employees around the world who are focused on our customers’ success in this new era.”

Read the full letter on LinkedIn.

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65.
‘Employees of Microsoft’ ask Microsoft not to bid on US Military’s Project JEDI in an open letter.
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members


Google launches new products, the Pixel 3 and Pixel 3 XL, Pixel Slate, and Google Home Hub

Sugandha Lahoti
10 Oct 2018
4 min read
Yesterday, Google announced a series of consumer hardware products. These included two new variants of its flagship Pixel smartphones, the Pixel 3 and Pixel 3 XL. Also launched were a high-performance tablet, the Pixel Slate, and the Google Home Hub.

Pixel 3 and Pixel 3 XL

The new smartphones from Google, the Pixel 3 and Pixel 3 XL, come with artificial intelligence features. Powered by the Google Assistant, they can automatically answer calls, take powerful photos, and provide an enhanced visual and audio experience while charging.

Source: Google Blog

With an integration of Google Lens, the Pixel 3 can scan and translate text, find similar styles of clothing, or identify popular plants and animals. It also supports Google’s Smart Compose, which suggests phrases in emails to help users draft faster. Pixel 3’s on-device AI can also screen phone calls and avoid spam calls. This feature is starting out in English in the U.S. Pixel users in the U.S. will also get a taste of an experimental new Google Assistant feature, powered by Duplex technology. This feature will initially be available later this year in New York, Atlanta, Phoenix, and the San Francisco Bay Area and will roll out to other U.S. cities in the future.

Pixel 3 also supports Digital Wellbeing, a suite of tools to help users limit the time they spend on their phones. Users can monitor time spent on their phones and set time limits on specific apps. Digital Wellbeing also comes with a new Wind Down mode that transitions the display to a grayscale screen at night.

Google Pixel Slate

The Google Pixel Slate is a new high-performance tablet in the vein of Google’s popular Pixelbook.

Source: Google Blog

This tablet is 7mm thin and weighs 1.6 lbs, with rounded edges and curved 2.5D glass. Its Molecular Display packs 293 pixels per inch for the sharpest picture. Pixel Slate includes 8MP cameras on both the rear and front and dual front-firing speakers. It comes with three months of YouTube TV subscription and up to 12 hours of battery life. Its Pixel Imprint power button doubles as a fingerprint sensor. Pixel Slate is compatible with the Pixel Slate Keyboard and the Pixelbook Pen. Pixel Slate starts at $599 with several configurations available. The Pixel Slate Keyboard is $199, and the Pixelbook Pen is $99.

Google Home Hub

Another addition to the Home series is the Google Home Hub. This home automation device has the built-in Google Assistant to traverse Google’s products: Search, YouTube, Google Photos, Calendar, Maps, and more. Its 7” screen features a floating display to naturally fit on any surface. Google purposely left out a camera for privacy reasons. Other features include:

An Ambient EQ light sensor which allows the screen to automatically adjust to match the lighting in the room.
Connection with 10,000+ types of smart home devices from 1,000+ popular brands.
With live albums, a new feature from Google Photos, users can view their recent photos even while Google Home Hub is not in use.

Google Home Hub is available for $149 for pre-order from the Google Store. It will also be available by October 22 at Best Buy, Target, Walmart, and other retailers.

In light of its recent Google+ data breach, Google has also restated its guiding principle. Per its website, “We respect our users and put them first. We feel a deep responsibility to provide you with a helpful, personal Google experience, and that guides the work we do in three very specific ways: First, we want to provide you with an experience that is unique to you. Just like Google is organizing the world’s information, the combination of AI, software, and hardware can organize your information—and help out with the things you want to get done. The Google Assistant is the best expression of this, and it’s always available when, where, and however you need it. Second, we’re committed to the security of our users. We need to offer simple, powerful ways to safeguard your devices. We’ve integrated Titan™ Security, the system we built for Google, into our new mobile devices. Titan™ Security protects your most sensitive on-device data by securing your lock screen and strengthening disk encryption. Third, we want to make sure you’re in control of your digital wellbeing. From our research, 72 percent of our users are concerned about the amount of time people spend using tech. We take this very seriously and have developed new tools that make people’s lives easier and cut back on distractions.”

Read more about the new products on the Google Blog.

Google announces new Artificial Intelligence features for Google Search on its 20th birthday.
Google’s Stories to use artificial intelligence to create stories like Snapchat and Instagram.
Google enhances Wear OS design, adds a Google Assistant feed and much more


Facebook introduces two new AI-powered video calling devices “built with Privacy + Security in mind”

Sugandha Lahoti
09 Oct 2018
4 min read
Yesterday, Facebook launched two brand new video communication devices. Named Portal and Portal+, these devices let you video call anyone with richer, hands-free experiences. The Portal features a 10-inch 1280 x 800 display, while the Portal+ features a 15-inch 1920 x 1080 display. Both devices are powered by artificial intelligence, which includes Smart Camera and Smart Sound technology. Smart Camera stays with the action and automatically pans and zooms to keep everyone in view. Smart Sound minimizes background noise and enhances the voice of whoever is talking, no matter where they move.

Source: Facebook

Portal can also be used to call Facebook friends and connections on Messenger even if they don’t have Portal. It also supports group calls of up to seven people at the same time. Portal also offers hands-free voice control with Amazon Alexa built in, which can be used to track sports scores, check the weather, control smart home devices, order groceries, and more. Facebook has also enabled shared activities on its Portal devices by partnering with Spotify Premium, Pandora, iHeartRadio, Facebook Watch, Food Network, and Newsy.

Keeping in mind its security breach that affected 50 million users two weeks ago, Facebook says it has paid a lot of attention to privacy and security features. Per its website:

“We designed Portal with tools that give you control: You can completely disable the camera and microphone with a single tap. Portal and Portal+ also come with a camera cover, so you can easily block your camera’s lens at any time and still receive incoming calls and notifications, plus use voice commands. To manage Portal access within your home, you can set a four- to 12-digit passcode to keep the screen locked. Changing the passcode requires your Facebook password. We also want to be upfront about what information Portal collects, help people understand how Facebook will use that information and explain the steps we take to keep it private and secure: Facebook doesn’t listen to, view, or keep the contents of your Portal video calls. In addition, video calls on Portal are encrypted. For added security, Smart Camera and Smart Sound use AI technology that runs locally on Portal, not on Facebook servers. Portal’s camera doesn’t use facial recognition and doesn’t identify who you are. Like other voice-enabled devices, Portal only sends voice commands to Facebook servers after you say, “Hey Portal.” You can delete your Portal’s voice history in your Facebook Activity Log at any time.”

In all of the above, Facebook seems quite cryptic about audio data. It also doesn’t really explain how it will use the information it collects from users. The voice data is stored on Facebook servers by default, probably to improve Portal’s understanding of the user’s language quirks and needs. But it does make one wonder: should this be an opt-in rather than an opt-out by default? Another jarring aspect is the need for one’s Facebook password to change the device’s passcode. It feels like the new devices are yet another way for Facebook to add users to Facebook, not to mention the fact that Facebook just had a data breach on its site, the repercussions of which it is still investigating.

In an interesting poll conducted on Twitter by Dr. Jen Golbeck, Professor at UMD, over 63% of respondents said that they would not trust Facebook to responsibly operate a surveillance device in their home.

https://twitter.com/jengolbeck/status/1049343277110054912

Read more about the devices in Facebook’s announcement.

Facebook Dating app to release as a test version in Colombia.
Facebook’s Glow, a machine learning compiler, to be supported by Intel, Qualcomm and others
How Facebook is advancing artificial intelligence [Video]


Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat

Melisha Dsouza
03 Oct 2018
3 min read
At the Xilinx Developer Forum in San Jose, Arm announced its collaboration with Xilinx, the market leader in FPGAs. The collaboration plans to bring the benefits of Arm Cortex-M processors to FPGAs through the Arm DesignStart program, providing scalability and a standardized processor architecture across the Xilinx portfolio. Users can expect fast, completely no-cost access to soft processor IP, while taking advantage of easy design integration with Xilinx tools and comprehensive software development solutions to accelerate success on FPGAs. These processors will enable embedded developers to design and innovate confidently, while benefiting from simplified software development and superior code density. In addition, products can be easily scaled on these processors, thanks to the support of the broad ecosystem of software, tools, and services around Arm.

Arm for FPGA comes with the following benefits:

#1 Maximum choice and flexibility

Users get easy and instant access to Cortex-M1 and Cortex-M3 soft processor IP for FPGA integration with Xilinx products. They will not be charged any license fee or royalties for this access.

#2 Reduced software costs

The program focuses on reducing software costs by maximizing reuse of software across an entire OEM’s product portfolio on a standardized CPU architecture, scaling from single board computers through to FPGAs.

#3 Ease of design

The team has ensured easy integration with Xilinx system and peripheral IP through the Vivado Design Suite, using a drag-and-drop design approach to create FPGA systems with Cortex-M processors. The extensive software ecosystem and knowledge base of others designing on Arm will ultimately reduce the time to market for these processors.

#4 Measures to combat the FOSSi (free and open source silicon) threat

Arm Cortex-M1 includes a mandatory license agreement, which contains clauses against reverse engineering. The agreement also prevents the use of these cores for comparative benchmarking. These clauses will help Arm ensure its IP holds up against the latest and greatest FOSSi equivalents (like RISC-V) when running on the same FPGAs. This is not the first time that Arm has pushed back against FOSSi threats; earlier this year, it launched an aggressive marketing campaign specifically targeting RISC-V.

The Arm and Xilinx collaboration will enable developers to take advantage of the benefits of heterogeneous computing on a single processor architecture. To know more about this news, head over to Arm’s official blog.

Meet ‘Foreshadow’: The L1 Terminal Fault in Intel’s chips
SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and Arm chipsets
Qualcomm announces a new chipset for standalone AR/VR headsets at Augmented World Expo


A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer

Prasad Ramesh
02 Oct 2018
2 min read
An open-source libre GPU project is in the works by Luke Kenneth Casson Leighton, the hardware engineer who developed the EOMA68, an earth-friendly computer. The project already has access to $250k USD in funding.

The basic idea for this "libre GPU" is to use a RISC-V processor. The GPU will be mostly software-based: it will leverage the LLVM compiler infrastructure and use a software-based Vulkan renderer to emit code and run on the RISC-V processor. The Vulkan implementation will be written in the Rust programming language.

The project's current road-map has details only on the software side, namely figuring out the state of the RISC-V LLVM back-end. Work is being done on writing a user-space graphics driver and implementing the necessary bits for proposed RISC-V extensions like "Simple-V". While doing this, the team will start figuring out the hardware design and the rest of the project. The road-map is quite simplified for the arduous task at hand. The website notes: “Once you've been through the "Extension Proposal Process" with Simple-V, it need never be done again, not for one single parallel / vector / SIMD instruction, ever again.”

This process will include creating a fixed-function 3D "FP to ARGB" custom instruction and a custom extension with special 3D pipelines. With Simple-V, there is no need to worry about how those operations would be parallelised. This is not a new concept; it is borrowed directly from videocore-iv, which calls it "virtual parallelism".

It is an enormous effort on both the software and hardware ends to come up with a combined open-source RISC-V, Rust, LLVM, and Vulkan project. It is difficult even with the funding, considering it is a software-based GPU. It is worth noting that the EOMA68 project was started by Luke in 2016, raised over $227k USD from crowdfunding participants, and hasn't shipped yet.

To know more about this project, visit the libre risc-v website.

NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD’s deep learning plans
PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn

California passes the U.S.' first IoT security bill

Prasad Ramesh
25 Sep 2018
3 min read
California likes to lead the way when it comes to digital regulation. Just a few weeks ago it passed legislation that looks like it could restore net neutrality. Now, a bill designed to tighten IoT security is with the governor awaiting signature for it to be carried into California state law.

The bill, SB-327 Information privacy: connected devices, was initially introduced in February 2017 by Senator Jackson. It was the first legislation of its kind in the US. Approved at the end of August, it will come into effect at the start of 2020 once signed by Governor Jerry Brown.

Read next: IoT Forensics: Security in an always connected world where things talk

What California’s IoT bill states

The new IoT security bill covers a number of important areas. For example, manufacturers' IoT devices will need to contain certain safety and security features:

Security should be appropriate to the nature and function of the device.
The feature should be appropriate to the information an IoT device may collect, contain, or transmit.
It should be designed to protect the device and the information within it from unauthorized access, destruction, use, modification, or disclosure.

If an IoT device requires authentication over the internet, further conditions need to be met, such as:

The preset password must be unique to each device that is manufactured (illustrated in the code sketch at the end of this article).
The device must ask the user to generate a new authentication method before being able to use it for the first time.

It’s worth noting that the points mentioned above for IoT security are not applicable to IoT devices that are subject to security requirements under federal law. Also, a covered entity like a health care provider, business associate, contractor, or employer subject to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) or the Confidentiality of Medical Information Act is exempt from the requirements mentioned above.

The IoT is a network of devices that connect to the internet, often via Wi-Fi. They are not openly visible, as most of them sit inside a local network, but they often do not have many security measures. The bill doesn't give an exact definition of a ‘reasonable security feature’ but provides a few guiding points in the interest of a user’s security. The legislation states:

“This bill, beginning on January 1, 2020, would require a manufacturer of a connected device, as those terms are defined, to equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure, as specified.”

Criticisms of the IoT bill

Some cybersecurity experts have criticised the legislation. For example, Robert Graham writes on his Errata Security blog that the bill is “based on a superficial understanding of cybersecurity/hacking that will do little improve security, while doing a lot to impose costs and harm innovation.” He explains that “the point [of good cybersecurity practice] is not to add ‘security features’ but to remove ‘insecure features’.” Graham’s criticisms underline that while the legislation might be well-intentioned, whether it will be impactful remains another matter. This is, at the very least, a step in the right direction by a state that is keen to take digital security and freedom into its own hands.
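As an aside for developers, the bill's requirement that the preset password be "unique to each device" is easy to picture in code. The following is a minimal, hypothetical sketch in Python of a manufacturer-side provisioning step; the function name, field names, and password policy are invented for illustration and are not taken from the bill or from any vendor's tooling.

```python
import secrets
import string


def provision_device(serial_number: str, password_length: int = 16) -> dict:
    """Generate a unique preset password for one device at manufacture time.

    Hypothetical illustration of SB-327's 'unique to each device' requirement:
    every unit gets its own randomly generated credential instead of a shared
    factory default such as 'admin/admin'.
    """
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(password_length))
    return {
        "serial_number": serial_number,
        "preset_password": password,       # printed on the unit's label, never reused
        "must_change_on_first_use": True,  # mirrors the bill's first-use condition
    }


if __name__ == "__main__":
    # Provision three example units; every credential comes out different.
    for serial in ("SN-0001", "SN-0002", "SN-0003"):
        print(provision_device(serial))
```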
You can read the bill at the California Legislative Information website.

How Blockchain can level up IoT Security
Defending your business from the next wave of cyberwar: IoT Threats


These robot jellyfish are on a mission to explore and guard the oceans

Bhagyashree R
24 Sep 2018
3 min read
Earlier last week, a team of US scientists from Florida Atlantic University (FAU) and the US Office of Naval Research published a paper on five jellyfish robots that they have manufactured. The paper is titled Thrust force characterization of free-swimming soft robotic jellyfish. The scientists' prime motive in building such robotic jellyfish is to track and monitor fragile marine ecosystems without causing unintentional damage to them. These soft robots can swim through openings narrower than their bodies and are powered by hydraulic silicone tentacles. The so-called ‘jelly-bots’ demonstrated the ability to squeeze through narrow openings using circular holes cut in a plexiglass plate.

The design structure of ‘jelly-bots’

Jelly-bots have a design similar to that of a moon jellyfish (Aurelia aurita) during the ephyra stage of its life cycle, before it becomes a fully grown medusa. To avoid damage to fragile biological systems by the robots, soft hydraulic network actuators were chosen. To allow the jellyfish to steer, the team uses two impeller pumps to inflate the eight tentacles. The mold models for the jellyfish robot were designed in SolidWorks and subsequently 3D printed on an Ultimaker 2 out of PLA (polylactic acid). Each jellyfish has a different rubber hardness to test the effect it has on propulsion efficiency.

Source: IOPScience

What was this study about?

The jelly robots helped the scientists determine the impact of the following factors on the measured thrust force:

Actuator material Shore hardness
Actuation frequency
Tentacle stroke actuation amplitude

The scientists found that all three of these factors significantly impact mean thrust force generation, which peaks with a half-stroke actuation amplitude at a frequency of 0.8 Hz.

Results

The material composition of the actuators significantly impacted the measured force produced by the jellyfish, as did the actuation frequency and stroke amplitude. The greatest forces were measured with a half-stroke amplitude at 0.8 Hz and a tentacle actuator-flap material Shore hardness composition of 30–30. In the tests, the jellyfish was able to swim through openings narrower than the nominal diameter of the robot and demonstrated the ability to swim directionally. The jellyfish robots were tested in the ocean and have the potential to monitor and explore delicate ecosystems without inadvertently damaging them.

One of the scientists, Dr. Engeberg, said to Tech Xplore: "In the future, we plan to incorporate environmental sensors like sonar into the robot's control algorithm, along with a navigational algorithm. This will enable it to find gaps and determine if it can swim through them."

To know more about the jelly-bots, read the research paper published by the scientists. You may also go through a video showing the jelly-bots functioning in deep waters.

Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
MEPs pass a resolution to ban “Killer robots”
6 powerful microbots developed by researchers around the world


It’s Day 1 for Amazon Devices: Amazon expands its Echo device lineup, previews Alexa Presentation Language and more

Sugandha Lahoti
21 Sep 2018
4 min read
Amazon unveiled a range of Echo devices at the Amazon Devices Event hosted at its Seattle headquarters yesterday. The products announced included a revamped selection of Amazon’s smart speakers (Echo Sub, Echo Dot, and Echo Plus), smart displays (the Echo Show and Echo Spot), and other smart devices. Also released were a smart microwave (AmazonBasics Microwave), the Echo Wall Clock, Fire TV Recast, and the Amazon Smart Plug. This event marks the largest number of devices and features (over 30) that Amazon has ever launched in a day.

Alexa Presentation Language

For developers, Amazon introduced the Alexa Presentation Language to easily create Alexa skills for Alexa devices with screens. The Alexa Presentation Language (APL) is in preview and allows developers to build voice experiences with graphics, images, slideshows, and video. Developers will be able to control how graphics flow with voice, customize visuals, and adapt them to Alexa devices and skills (a minimal sketch of an APL document follows at the end of this article). Supported devices will include Echo Show, Echo Spot, Fire TV, and select Fire Tablet devices.

Now let’s take a broad look at the key device announcements.

Amazon Smart Speakers

Echo Dot: The new version of the smart speaker now offers 70 percent louder sound compared to its predecessor. It is a voice-controlled smart speaker with Alexa integration that can serve up music, news, information, and more. The driver is now larger, growing from 1.1” to 1.6” for better sound clarity and improved bass. It is Bluetooth enabled, so you can connect it to another speaker or use it all by itself.

Echo Input: If you already have speakers, this device can add Alexa voice control to them via a 3.5mm audio cable or Bluetooth. It has a four-microphone array. Echo Input is just 12.5mm tall and thin enough to disappear into the room. It will be available later this year for $34.99.

Echo Plus: Echo Plus combines Amazon’s cloud-based Natural Language Understanding and Automatic Speech Recognition with a built-in Zigbee hub to make it one of the premier smart speakers. It also has a new fabric casing and a built-in temperature sensor. Pre-orders for this model begin today at $149.99.

Echo Link: The Echo Link can connect to a receiver or amplifier, with multiple digital and analog inputs and outputs for compatibility with your existing stereo equipment. It can control music selection, volume, and multi-room playback on your stereo with your Echo or the Alexa app. Echo Link will be available to customers soon.

Echo Sub: This 100-watt subwoofer can connect to other speakers and create a 2.1-sound solution. The $129.99 Echo Sub will launch later this month, with pre-orders beginning today.

Amazon Smart Displays

Echo Show: The new Echo Show is completely redesigned with a larger screen, a smart home hub, and improved sound quality. Amazon is also introducing Doorbell Chime Announcements, so users will hear a chime on all Echo devices when someone presses their smart doorbell. Echo Show includes a high-resolution 10-inch HD display and an 8-mic array. The new Echo Show will be available to customers for $229.99; shipping starts next month.

Other Smart devices

Echo Wall Clock: This is a $30 Echo companion device, an analog clock with Alexa-powered voice recognition. It is 10 inches across, battery-powered, and features a ring of 60 LEDs around the rim that show ongoing Alexa timers. It also has automatic time syncing and Daylight Savings Time adjustment.

AmazonBasics Microwave: It’s a $59.99 voice-activated microwave. It features Dash Replenishment and an array of Alexa features, including integration with connected ovens, door locks, and other smart fixtures, reminders, and access to more than 50,000 third-party skills.

Fire TV Recast: This is a companion DVR that lets users watch, record, and replay free over-the-air programming on any Fire TV, Echo Show, and compatible Fire tablet and mobile devices. Users can record up to two or four shows at once and stream on any two devices at a time. It can also be paired with Alexa.

Amazon Smart Plug: The Amazon Smart Plug works with Alexa to add voice control to any outlet. You can schedule lights, fans, and appliances to turn on and off automatically, or control them remotely when you’re away.

Follow along the live blog of the event for a minute-to-minute update.

Google to allegedly launch a new Smart home device.
Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration.
The iRobot Roomba i7+ is a cleaning robot that maps and stores your house and also empties the trash automatically.
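To give a sense of what APL looks like in practice, here is a minimal sketch, written as Python dictionaries, of an APL document and the RenderDocument directive a skill would return to draw it on a screen device. It is based on the publicly documented APL 1.0 format; the token, text, and sizing values are placeholders rather than anything Amazon demonstrated at the event.

```python
import json

# A minimal APL document: one centered text block inside the main template.
apl_document = {
    "type": "APL",
    "version": "1.0",
    "mainTemplate": {
        "parameters": ["payload"],
        "items": [
            {
                "type": "Container",
                "items": [
                    {
                        "type": "Text",
                        "text": "Welcome to the demo skill",  # placeholder copy
                        "fontSize": "40dp",
                        "textAlign": "center",
                    }
                ],
            }
        ],
    },
}

# The directive a skill attaches to its response so a screen device renders the document.
render_directive = {
    "type": "Alexa.Presentation.APL.RenderDocument",
    "token": "demoToken",  # placeholder token chosen by the skill
    "document": apl_document,
}

if __name__ == "__main__":
    # A skill would place this directive in its response's "directives" array.
    print(json.dumps(render_directive, indent=2))
```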


Hybrid nanomembranes make conformal wearable sensors possible, demo South Korean researchers with imperceptible loudspeakers and mics

Natasha Mathur
20 Sep 2018
4 min read
A team of researchers from Ulsan National Institute of Science and Technology (UNIST) in South Korea has developed an ultrathin, transparent wearable device that is capable of turning your skin into a loudspeaker. The device has been created to help hearing- and speech-impaired people. However, it has potential applications in other domains such as wearable IoT sensors and healthcare devices.

Skin-attachable NM loudspeaker

This new device is created with conductive hybrid nanomembranes (NMs) of nanoscale thickness, comprising an orthogonal silver nanowire array embedded in a polymer matrix. This helps substantially enhance the electrical as well as mechanical properties of ultrathin polymer NMs. There is no loss in optical transparency because of the orthogonal array structure. “Here, we introduce ultrathin, conductive, and transparent hybrid NMs that can be applied to the fabrication of skin-attachable NM loudspeakers and microphones, which would be unobtrusive in appearance because of their excellent transparency and conformal contact capability,” as mentioned in the research paper.

Hybrid NMs significantly enhance the electrical and mechanical properties of ultrathin polymer NMs, which can then be intimately attached to the human skin. The nanomembrane can then be used as a loudspeaker that can be attached to almost anything to produce sound. The researchers also introduced a similar device that acts as a microphone, which can be connected to smartphones and computers to unlock voice-activated security systems.

Skin-attachable and transparent NM loudspeaker

The researchers fabricated a skin-attachable loudspeaker using hybrid NMs. This speaker is capable of emitting thermoacoustic sound with the help of temperature-induced oscillation of the surrounding air. This temperature oscillation is caused by Joule heating of the orthogonal AgNW array upon the application of an AC voltage. The sound emitted from the NM loudspeaker is then analyzed with an acoustic measurement system. “We used a commercial microphone to collect and record the sound produced by the loudspeaker. To characterize the sound generation of the loudspeaker, we confirmed that the sound pressure level (SPL) of the output sound increases linearly as the distance between the microphone and the loudspeaker decreases,” reads the research paper.

Wearable and transparent NM microphone

The researchers also designed a wearable and transparent microphone using hybrid NMs combined with micropatterned PDMS (the NM microphone). This microphone is capable of detecting sound and recognizing a human voice. These wearable microphone sensors are attached to a speaker's neck to sense the vibration of the vocal folds.

Skin-attachable NM microphone

The skin-attachable NM microphone comprises a hybrid NM mounted on a micro pyramid-patterned polydimethylsiloxane (PDMS) film. This sandwich-like structure helps precisely detect the sound and vibration of the vocal cords through the generation of a triboelectric voltage. The triboelectric voltage results from the coupling effect of contact electrification and electrostatic induction. The sensor works by converting the frictional force generated by the oscillation of the transparent conductive nanofiber into electric energy.

The sensitivity of the NM microphone in response to sound emissions was evaluated by fabricating two device structures: a freestanding hybrid NM integrated with a holey PDMS film (the NM microphone), and another fully adhered to a planar PDMS film without a hole. “As a proof-of-concept demonstration, our NM microphone was applied to a personal voice security system requiring voice-based identification applications. The NM microphone was able to accurately recognize a user’s voice and authorize access to the system by the registrant only,” reads the research paper.

For more details, check out the official research paper.

Now Deep reinforcement learning can optimize SQL Join Queries, says UC Berkeley researchers
MIT’s Transparency by Design Network: A high-performance model that uses visual reasoning for machine interpretability
Swarm AI that enables swarms of radiologists, outperforms specialists or AI alone in predicting Pneumonia

Google to allegedly launch a new Smart home device

Guest Contributor
20 Sep 2018
2 min read
In the midst of all the leaks related to the Pixel 3 and Pixel 3 XL, regarding whether Google will embrace an iPhone-like notch or offer wireless charging, reports have surfaced that Google has even more news to showcase at its big hardware event, “Made By Google,” on October 9.

According to a report from MySmartPrice, Google might launch a new device called the "Google Home Hub", a smart speaker sporting a 7-inch display with large squarish speakers, in two variants: Chalk White and Charcoal.

Image source: MySmartPrice

Google has been pretty successful with its smart home devices like the Google Home series, but after Amazon teased its smart home device with a screen, the Amazon Echo Show, the tech giant was keen to work on a product to compete with its rival. If the leaked news from MySmartPrice is to be believed, with the Google Home Hub powered by the Google Assistant we can watch YouTube, HBO, and videos from other content providers. Additionally, the device will also display time, weather, daily commute information, and other regular Google Assistant features. However, it will not have a full-fledged Android OS.

While the device comes packed with Google software, based on the leaks what seems to be missing is a camera. It would have been perfect if the device sported a camera as well, which could have been used for video calling, as Google is aggressively marketing its video calling app Google Duo. The device will, however, feature WiFi and Bluetooth.

Image source: MySmartPrice

With the new device, Google might also introduce new features for the Google Assistant. Though there is no confirmation from Google regarding the product yet, the timing makes perfect sense, as Google's upcoming event on October 9th would be the perfect place to announce a Google Home Hub along with its much-awaited Pixel smartphone series.

Read the full article on MySmartPrice.

Author Bio

A full-time Linux admin and part-time reader, always up for the latest technology and a cup of tea, interested in cloud services, machine learning, and artificial intelligence.

Amazon Echo vs Google Home: Next-gen IoT war.
Home Assistant: an open source Python home automation hub to rule all things smart.
Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration.


The new Bolt robot from Sphero wants to teach kids programming

Prasad Ramesh
12 Sep 2018
2 min read
Sphero, a robotic toy building company, announced its latest Bolt robotic ball aimed at teaching kids basic programming. It has advanced sensors, an LED matrix, and infrared sensors to communicate with other Bolt robots.

The robot itself is 73mm in diameter. There’s an 8x8 LED matrix inside a transparent casing shell. This matrix displays helpful prompts like a lightning bolt when Bolt is charging. Users can fully program the LED matrix to display a wide variety of icons connected to certain actions. This can be a smiley face when a program is completed, a sad face on failure, or arrow marks for direction changes.

The new Bolt has a longer battery life of around two hours and charges back up in six hours. It connects to the Sphero Edu app to use community-created activities, build your own, analyze sensor data, and more. The casing is now transparent instead of the opaque colored ones from previous Sphero balls. The sphere weighs 200g in all and houses infrared sensors that allow the Bolt to detect other nearby Bolts to interact with. Users can program specific interactions between multiple Bolts.

The Edu app supports coding through drawing on the screen or via Scratch blocks. You can also use JavaScript to program the robot to create custom games and drawings. There are sensors to track speed, acceleration, and direction, or to drive BOLT. This can be done without having to aim, since the Bolt has a compass. There is also an ambient light sensor that allows programming the Bolt based on the room’s brightness. Other than education, you can also simply drive BOLT and play games with the Sphero Play app.

Source: Sphero website

It sounds like a useful little robot and is available now to consumers for $149.99. Educators can also buy BOLT in 15-packs for classroom learning. For more details, visit the Sphero website.

Is ROS 2.0 good enough to build real-time robotic applications? Spanish researchers find out.
How to assemble a DIY selfie drone with Arduino and ESP8266
ROS Melodic Morenia released


Is ROS 2.0 good enough to build real-time robotic applications? Spanish researchers find out.

Prasad Ramesh
11 Sep 2018
4 min read
Last Friday, a group of Spanish researchers published a research paper titled ‘Towards a distributed and real-time framework for robots: evaluation of ROS 2.0 communications for real-time robotic applications’. The paper describes an experimental setup exploring the suitability of ROS 2.0 for real-time robotic applications. In it, ROS 2.0 communications are evaluated in a robotic inter-component communication case running on hardware, on top of Linux. The researchers benchmarked and studied the worst-case latencies and characterized ROS 2.0 communications for real-time applications. The results indicate that a proper real-time configuration of the ROS 2.0 framework reduces jitter, making soft real-time communications possible, but there are also some limitations that prevent hard real-time communications.

What is ROS?

ROS is a popular framework that provides services for the development of robotic applications. It has utilities like a communication infrastructure, drivers for a variety of software and hardware components, and libraries for diagnostics, navigation, manipulation, and other things. ROS simplifies the process of creating complex and robust robot behavior across many robotic platforms. ROS 2.0 is the new version, which extends the concepts of the first version. The Data Distribution Service (DDS) middleware is used in ROS 2.0 due to its characteristics and benefits compared to other solutions.

Need for real-time applications in robotic systems

In all robotic systems, tasks need to be time responsive. While moving at a certain speed, robots must be able to detect an obstacle and stop to avoid a collision. These robot systems often have timing requirements to execute tasks or exchange data. If the timing requirements are not met, the system behavior will degrade or the system will fail. With ROS being the standard software infrastructure for robotic applications development, demand rose in the ROS community to include real-time capabilities. Hence, ROS 2.0 was created to deliver real-time performance. But to deliver a complete, distributed, and real-time solution for robots, ROS 2.0 needs to be surrounded with appropriate elements. These elements are described in the papers Time-sensitive networking for robotics and Real-time Linux communications: an evaluation of the Linux communication stack for real-time robotic applications. ROS 2 uses DDS as its communication middleware. DDS contains Quality of Service (QoS) parameters which can be configured and tuned for real-time applications (a configuration sketch appears at the end of this article).

The results of the experiment

In the research paper, a setup was made to measure the real-time performance of ROS 2.0 communications over Ethernet on a PREEMPT-RT patched kernel. The end-to-end latencies between two ROS 2.0 nodes on different machines were measured. A Linux PC and an embedded device, which could represent a robot controller (RC) and a robot component (C), were used for the setup. An overview of the setup can be seen as follows:

Source: LinkedIn

Some of the results are as follows:

Source: LinkedIn

The image shows the impact of RT settings under different system loads: a) system without additional load and without RT settings; b) system under load without RT settings; c) system without additional load with RT settings; d) system under load with RT settings.

The results from the experiment showed that a proper real-time configuration of the ROS 2.0 framework and DDS threads greatly reduces the jitter and worst-case latencies. This means smooth and fast communication. However, there were also some limitations when non-critical traffic in the Linux network stack comes into the picture. By configuring the network interrupt threads and using Linux traffic control QoS methods, some of the problems could be avoided. The researchers conclude that it is possible to achieve soft real-time communications with mixed-critical traffic using the Linux network stack. However, hard real-time is not possible due to the aforementioned limitations.

For a more detailed understanding of the experiments and results, you can read the research paper.

Shadow Robot joins Avatar X program to bring real-world avatars into space
6 powerful microbots developed by researchers around the world
Boston Dynamics’ ‘Android of robots’ vision starts with launching 1000 robot dogs in 2019
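For readers who want to see what tuning these QoS parameters looks like in code, below is a rough sketch of a ROS 2 publisher with an explicit reliability and history policy, written with rclpy. The topic name, publishing rate, and queue depth are arbitrary example values, the QoS enum names can vary slightly between ROS 2 distributions, and reproducing the paper's latencies additionally requires a PREEMPT-RT kernel and thread-priority tuning that sit outside this snippet.

```python
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, QoSReliabilityPolicy, QoSHistoryPolicy
from std_msgs.msg import String


class SensorPublisher(Node):
    """Minimal ROS 2 node publishing with an explicitly tuned DDS QoS profile."""

    def __init__(self):
        super().__init__('sensor_publisher')
        # Reliable delivery with a small bounded history: the kind of DDS QoS
        # tuning the paper explores for inter-component communication.
        qos = QoSProfile(
            reliability=QoSReliabilityPolicy.RELIABLE,
            history=QoSHistoryPolicy.KEEP_LAST,
            depth=1,
        )
        self.publisher = self.create_publisher(String, 'sensor_data', qos)
        self.timer = self.create_timer(0.01, self.publish_sample)  # 100 Hz example rate

    def publish_sample(self):
        msg = String()
        msg.data = 'sample'
        self.publisher.publish(msg)


def main():
    rclpy.init()
    node = SensorPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```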

Shadow Robot joins Avatar X program to bring real-world avatars into space

Savia Lobo
07 Sep 2018
2 min read
The Shadow Robot Company, experts at grasping and manipulation for robotic hands, announced that it is joining a new space avatar program named AVATAR X. This program is led by ANA HOLDINGS INC. (ANA HD) and the Japan Aerospace Exploration Agency (JAXA).

AVATAR X aims to accelerate the integration of technologies such as robotics, haptics, and Artificial Intelligence (AI) to enable humans to remotely build camps on the Moon, support long-term space missions, and further explore space from Earth. To make this possible, Shadow will work closely with the programme’s partners, leveraging the unique teleoperation system that it has already developed and that is also available to purchase. AVATAR X is set to be launched as a multi-phase programme. It aims to revolutionize space development and make living on the Moon, Mars, and beyond a reality.

What will the AVATAR X program include?

The AVATAR X program will comprise several clever elements, including Shadow’s Dexterous Hand, which can be controlled by a CyberGlove worn by the operator. This hand will be attached to a UR10 robot arm controllable by a PhaseSpace motion capture tool worn on the operator’s wrist. Both the CyberGlove and the motion capture wrist tool have mapping capability, so that the Dexterous Hand and the robot arm can mimic an operator’s movements. The new system allows remote control of robotic technologies while providing distance and safety. Furthermore, Shadow uses an open source platform providing full access to the code to help users develop the software for their own specific needs.

Shadow’s Managing Director, Rich Walker, says, “We’re really excited to be working with ANA HD and JAXA on the AVATAR X programme and it gives us the perfect opportunity to demonstrate how our robotics technology can be leveraged for avatar or teleoperation scenarios away from UK soil, deep into space. We want everyone to feel involved at such a transformative time in teleoperation capabilities and encourage all those interested to enter the AVATAR XPRIZE competition.”

To know more about AVATAR X in detail, visit ANA Group’s press release.

Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics
How Rolls Royce is applying AI and robotics for smart engine maintenance
AI powered Robotics: Autonomous machines in the making


The iRobot Roomba i7+ is a cleaning robot that maps and stores your house and also empties the trash automatically.

Prasad Ramesh
07 Sep 2018
2 min read
iRobot, the intelligent robot maker, revealed its latest robot vacuum, the Roomba i7+, yesterday. It is a successor to the Roomba 980, which was launched in 2015. The i7+ has two key changes: it stores a map of your house and empties the trash itself.

https://www.youtube.com/watch?v=HPgxcETuqzI

Weighing about 7.4 lbs, the Roomba i7+ is designed to be easier to manage than the previous models. The new charging base houses a larger trash bin for automatic emptying. The stationary base automatically sucks the debris out of the Roomba into a bag. The base has the capacity to hold the dirt of 30 cleanings, which means you’ll have to empty the bigger trash bag only about once a month, depending on your cleaning needs. The i7+ works with two rubber brushes, one to loosen up the dirt and another to lift and collect it. The large bag in the base traps dust so that it can’t escape.

It works on iAdapt® 3.0 Navigation with vSLAM® technology, both of which are patented. They allow the robot to map its surroundings and clean sections of your home systematically. It creates visual landmarks to keep track of areas it has cleaned and areas pending cleaning.

Source: iRobot

The i7+, like the older models, connects to the iRobot Home app and can sync with virtual assistants like Alexa to schedule cleanings. Like the previous 900 series, the i7+ maps your house; the difference is that the newer model stores the map for automatic navigation later. You can use the app to differentiate and name different rooms and control the cleaning frequency. With an assistant, you can use voice commands to clean specific rooms.

The i7+ will be available in stores from October. The price tag of $949 may not appeal to everyone, but if you want your house to be cleaned automatically, this is something to consider. There is also a lower-priced model, the i7, with a price tag of $699. This version does not have a self-emptying base or mapping features, but it can be controlled over Wi-Fi or with an assistant.

You can pre-order the latest Roomba i7+ from the iRobot website.

Home Assistant: an open source Python home automation hub to rule all things smart
How Rolls Royce is applying AI and robotics for smart engine maintenance
6 powerful microbots developed by researchers around the world