
Tech News - Robotics

15 Articles

World’s first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase

Fatema Patrawala
04 Jun 2019
3 min read
ANA Holdings Inc., HaptX, SynTouch, and Shadow Robot Company are set to unveil the next generation of robotics technology at the upcoming Amazon re:MARS expo, held in Las Vegas from June 4th to 7th. By combining the latest tech from across the field of robotics, they have built a teleoperation and telepresence system featuring the first robotic hand to successfully transmit touch sensations, a technology its inventors hail as the 'Holy Grail of robotics'.

The system combines Shadow Robot's world-leading dexterous robotic hand with SynTouch's biomimetic tactile sensors and HaptX's realistic haptic feedback gloves, enabling unprecedented precision in the remote control of a robotic hand. In a recent test, a human operator in California was able to operate a computer keyboard in London, with each keystroke detected through fingertip sensors on their glove and faithfully relayed 5,000 miles to the Dexterous Hand to recreate. The partners say that combining touch with teleoperation in this way is ground-breaking for future applications that perform actions at a distance, such as bomb disposal, deep-sea engineering, or even surgery performed across different states. At the Amazon re:MARS Tech Showcase, the trailblazing team will demonstrate their teleoperation and telepresence technology outside the lab for the first time.

Kevin Kajitani, Co-Director of ANA Holdings Inc.'s Avatar Division, says, "We are only beginning to scratch the surface of what is possible with these advanced Avatar systems and through telerobotics in general. In addition to sponsoring the $10M ANA Avatar XPRIZE, we've approached our three partner companies to seek solutions that will allow us to develop a high performance, intuitive, general-purpose Avatar hand. We believe that this technology will be key in helping humanity connect across vast distances."

Jake Rubin, Founder and CEO of HaptX, says, "Our sense of touch is a critical component of virtually every interaction. The collaboration between HaptX, Shadow Robot Company, SynTouch, and ANA brings a natural and realistic sense of touch to robotic manipulation for the first time, eliminating one of the last barriers to true telepresence."

Dr. Jeremy Fishel, Co-Founder of SynTouch, says, "We've got something exciting up our sleeves for re:MARS this year. Users will see just how essential the sense of touch is when it comes to dexterity and manipulation and the various applications it can have within industry."

Rich Walker, Managing Director of the Shadow Robot Company, says, "Our remotely controlled system can help transform work within risky environments such as nuclear decommissioning and we're already in talks with the UK nuclear establishment regarding the application of this advanced technology. It adds a layer of safety between the worker and the radiation zone as well as increasing precision and accuracy within glovebox-related tasks."

Paul Cutsinger, Head of Voice Design Education at Amazon Alexa, says, "re:MARS embraces an optimistic vision for scientific discovery to advance a golden age of innovation and this teleoperation technology by the Shadow Robot Company, SynTouch and HaptX more than fits the bill. It must be seen."

Also read:
Amazon to roll out automated machines for boxing up orders: Thousands of workers' jobs at stake
Artist Holly Herndon releases an album featuring an artificial intelligence 'musician'
Doteveryone report claims the absence of ethical frameworks and support mechanisms could lead to a 'brain drain' in the U.K. tech industry


Researchers at UC Berkeley's Robot Learning Lab introduce Blue, a new low-cost force-controlled robot arm

Bhagyashree R
18 Apr 2019
2 min read
Yesterday, a team of researchers from UC Berkeley's Robot Learning Lab announced the completion of their three-year project, Blue: a low-cost, high-performance robot arm built to work in real-world environments such as warehouses, homes, hospitals, and urban landscapes.

Video: https://www.youtube.com/watch?v=KZ88hPgrZzs&feature=youtu.be

With Blue, the researchers aim to significantly accelerate research towards useful home robots. Blue is capable of mimicking human motions in real-world environments and enables more intuitive teleoperation. Pieter Abbeel, the director of the Berkeley Robot Learning Lab and co-founder and chief scientist of AI startup Covariant, shared the vision behind the project: "AI has been moving very fast, and existing robots are getting smarter in some ways on the software side, but the hardware's not changing. Everybody's using the same hardware that they've been using for many years . . . We figured there must be an opportunity to come up with a new design that is better for the AI era."

Blue design details

Blue's dynamic properties meet or exceed the needs of a human operator: the robot has a nominal position-control bandwidth of 7.5 Hz and repeatability within 4 mm. It is a kinematically anthropomorphic robot arm with a 2 kg payload that costs less than $5,000. It has 7 degrees of freedom: 3 in the shoulder, 1 in the elbow, and 3 in the wrist. Blue uses quasi-direct drive (QDD) actuators, which offer better force control and selectable impedance and are highly backdrivable. These actuators make Blue resilient to damage and also make it safer for humans to be around.

The team is first distributing early-release arms to developers and industry partners, and a product release is expected within the next six months. A production edition of the Blue robot arm is planned to be available by 2020. To read more about Blue, check out the Berkeley Open Arms site.

Also read:
Walmart to deploy thousands of robots in its 5000 stores across US
Boston Dynamics' latest version of Handle, robot designed for logistics
Setting up a Raspberry Pi for a robot – Headless by Default [Tutorial]


Walmart to deploy thousands of robots in its 5000 stores across US

Fatema Patrawala
12 Apr 2019
4 min read
Walmart, the world's largest retailer, is following the latest tech trend and going all in on robots. It plans to deploy thousands of robots for lower-level jobs in 5,000 of its 11,348 US stores. In a statement released on its blog on Tuesday, the retail giant said that it was unleashing a number of technological innovations, including autonomous floor cleaners, shelf-scanners, conveyor belts, and "pickup towers", on stores across the United States.

Elizabeth Walker from Walmart Corporate Affairs says, "Every hero needs a sidekick, and some of the best have been automated. Smart assistants have huge potential to make busy stores run more smoothly, so Walmart has been pioneering new technologies to minimize the time an associate spends on the more mundane and repetitive tasks like cleaning floors or checking inventory on a shelf. This gives associates more of an opportunity to do what they're uniquely qualified for: serve customers face-to-face on the sales floor."

Walmart further announced that it would be adding 1,500 new floor cleaners, 300 more shelf-scanners, 1,200 conveyor belts, and 900 new pickup towers. The robots have been tested in dozens of markets and hundreds of stores to prove their effectiveness. Replacing people with machines for certain job roles will also reduce costs for Walmart: if you are not hiring people, they can't quit, demand a living wage, or take sick days off, which means better margins and efficiencies.

According to Walmart CEO Doug McMillon, "Automating certain tasks gives associates more time to do work they find fulfilling and to interact with customers." Continuing this logic, the retailer points to robots as a source of greater efficiency, increased sales, and reduced employee turnover.

"Our associates immediately understood the opportunity for the new technology to free them up from focusing on tasks that are repeatable, predictable and manual," John Crecelius, senior vice president of central operations for Walmart US, said in an interview with BBC Insider. "It allows them time to focus more on selling merchandise and serving customers, which they tell us have always been the most exciting parts of working in retail."

With the war for talent raging on in the world of retail and demands for minimum wage hikes a frequent occurrence, Walmart's expanding robot army is a signal that the company is committed to keeping labor costs down. Does that mean cutting jobs or restructuring the workforce? Walmart has not specified how many jobs it will cut as a result of this move, but when automation arrives at the largest retailer in the US, significant job losses can be expected. https://twitter.com/NoelSharkey/status/1116241378600730626

Early last year, Bloomberg reported that Walmart is removing around 3,500 store co-managers, a salaried role that acts as a lieutenant underneath each store manager. The U.S. in particular has an inordinately high proportion of employees performing routine functions that could easily be automated, so retail automation is bound to hit them the hardest. With costs on the rise, and Amazon a constant looming threat that has already resulted in the closing of thousands of mom-and-pop stores across the US, it was inevitable that Walmart would turn to automation as a way to stay competitive in the market.

As the largest retail employer in the US transitions to an automated retailing model, it will leave a good proportion of the 704,000-strong US retail workforce either unemployed, underemployed, or unready to transition into other jobs. How much Walmart assists its redundant workforce in transitioning to another livelihood will be a litmus test of its widely held image as a caring employer, in contrast to Amazon's ruthless image.

Also read:
How Rolls Royce is applying AI and robotics for smart engine maintenance
AI powered Robotics : Autonomous machines in the making
Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics


Shadow Robot Company, SynTouch, HaptX, and ANA Holdings collaborate on 'haptic robot hand' that can successfully transmit touch across the globe

Bhagyashree R
04 Mar 2019
3 min read
Haptic robots took a step forward when four organizations, Shadow Robot Company, SynTouch, HaptX, and ANA Holdings, came together. These companies have built the "world's first haptic robot hand" that transmits touch to the operator, the details of which they shared on Friday.

Note: Haptics is a growing technology in the field of human-computer interaction that deals with sensory interaction with computers. It is, essentially, the science of applying touch sensation and control to interaction with virtual or physical applications.

How does the haptic robot hand work?

First, the HaptX Gloves capture motion data to control the movement of the anthropomorphic dexterous hand built by Shadow Robot Company. BioTac sensors built by SynTouch are embedded in each fingertip of the robotic hand to collect tactile data. This data is used to recreate haptic feedback in the HaptX Gloves and is transmitted to the user's hand.

The system was first demonstrated in front of all the collaborating companies. In the demo, an operator in California used a haptic glove to control a dexterous robotic hand in London, under the guidance of a team from ANA Holdings in Tokyo. When the robot started typing on the computer keyboard, the embedded tactile sensors on the robot's fingertips recorded the press of each key, and the haptic data was shared with the human operator in California over the network in real time. The words typed by the robot were "Hello, World!". The telerobot was also shown doing a number of other things, like playing Jenga, building a pyramid of plastic cups, and moving pieces on a chessboard. (Image credits: Shadow Robot Company)

In an email to us explaining the applications and importance of this advancement, Kevin Kajitani, Co-Director of ANA AVATAR within ANA Holdings, said, "This achievement by Shadow Robot, SynTouch, and HaptX marks a significant milestone towards achieving the mission of Avatar X. This prototype paves the way for industry use, including medicine, construction, travel, and space exploration."

Rich Walker, Managing Director of Shadow Robot Company, said, "This teleoperation system lets humans and robots share their sense of touch across the globe - it's a step ahead in what can be felt and done remotely. We can now deliver remote touch and dexterity for people to build on for applications like safeguarding people from hazardous tasks, or just doing a job without having to fly there! It's not touch-typing yet, but we can feel what we touch when we're typing!"

Dr. Jeremy Fishel, Co-Founder of SynTouch, said, "We know from psychophysical studies that the sense of touch is essential when it comes to dexterity and manipulation. This is the first time anyone has ever demonstrated a telerobot with such high-fidelity haptics and control, which is very promising and would not have been possible without the great engineers and technologies from this collaboration."

Jake Rubin, Founder and CEO of HaptX, said, "Touch is a cornerstone of the next generation of human-machine interface technologies. We're honored to be part of a joint engineering effort that is literally extending the reach of humankind."

Also read:
The new Bolt robot from Sphero wants to teach kids programming
Is ROS 2.0 good enough to build real-time robotic applications? Spanish researchers find out.
Shadow Robot joins Avatar X program to bring real-world avatars into space
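The glove-to-hand-and-back pipeline described above is essentially a bidirectional control loop: operator motion flows out to the remote hand, and touch flows back. Below is a minimal sketch of that loop in C++. Every type and function in it is a hypothetical stand-in; the real system is built on proprietary HaptX, SynTouch, and Shadow Robot interfaces.

```cpp
#include <array>
#include <cstdio>

// All names below are illustrative stand-ins, not a real SDK.
struct GlovePose   { std::array<double, 24> joint_angles{}; };       // captured hand motion
struct TactileData { std::array<double, 5>  fingertip_pressure{}; }; // one value per finger

GlovePose   readGlove()   { return {}; }        // stand-in: HaptX glove motion capture
void        commandHand(const GlovePose &) {}   // stand-in: drive the Dexterous Hand
TactileData readBioTacs() { return {}; }        // stand-in: SynTouch fingertip sensors
void        renderHaptics(const TactileData &t) // stand-in: HaptX glove feedback
{
  std::printf("thumb pressure: %.2f\n", t.fingertip_pressure[0]);
}

int main() {
  // Each cycle: forward path (motion out), return path (touch back).
  for (int cycle = 0; cycle < 3; ++cycle) {
    commandHand(readGlove());
    renderHaptics(readBioTacs());
  }
}
```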


Apex.AI announced Apex.OS and Apex.Autonomy for building failure-free autonomous vehicles

Sugandha Lahoti
20 Nov 2018
2 min read
Last week, Alphabet's Waymo announced that they will launch the world's first commercial self-driving cars next month. Just two days after that, Apex.AI announced their autonomous mobility systems. This announcement came soon after they closed a $15.5M Series A funding round, led by Canaan with participation from Lightspeed.

Apex.AI has designed a modular software stack for building autonomous systems that integrates easily into existing systems as well as third-party software. An interesting claim they make about their system: "The software is not designed for peak performance — it's designed to never fail. We've built redundancies into the system design to ensure that single failures don't lead to system-wide failures." Their two products are Apex.OS and Apex.Autonomy.

Apex.OS

Apex.OS is a meta-operating system, an automotive version of ROS (Robot Operating System). It allows software developers to write safe and secure applications based on ROS 2 APIs. Apex.OS is built with safety in mind: it is being certified according to the automotive functional safety standard ISO 26262 as a Safety Element out of Context (SEooC) up to ASIL D. It ensures system security through HSM support, process-level security, encryption, and authentication. Apex.OS improves production code quality through the elimination of all unsafe code constructs, and it ships with support for automotive hardware, i.e. ECUs and automotive sensors. Moreover, it comes with complete documentation including examples, tutorials, and design articles, plus 24/7 customer support.

Apex.Autonomy

Apex.Autonomy provides developers with building blocks for autonomy. It has well-defined interfaces for easy integration with any existing autonomy stack. It is written in C++, is easy to use, and can be run and tested on Linux, Linux RT, QNX, Windows, and OSX. It is designed with production and ISO 26262 certification in mind, and is CPU-bound on x86_64 and amd64 architectures. A variety of LiDAR sensors are already integrated and tested.

Read more about the products on the Apex.AI website.

Also read:
Alphabet's Waymo to launch the world's first commercial self driving cars next month.
Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race.
Indeed lists top 10 skills to land a lucrative job, building autonomous vehicles.
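Since Apex.OS applications are written against ROS 2 APIs, application code takes the general shape of a ROS 2 node. Here is a minimal publisher written against the open-source rclcpp API; this is an illustrative sketch of the programming model, not Apex.OS code itself, which is a commercial product with its own headers and certification tooling.

```cpp
#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

using namespace std::chrono_literals;

// A minimal ROS 2 node that publishes a heartbeat message at 10 Hz.
class StatusPublisher : public rclcpp::Node {
public:
  StatusPublisher() : Node("status_publisher") {
    publisher_ = create_publisher<std_msgs::msg::String>("status", 10);
    timer_ = create_wall_timer(100ms, [this] {
      std_msgs::msg::String msg;
      msg.data = "ok";
      publisher_->publish(msg);
    });
  }

private:
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<StatusPublisher>());
  rclcpp::shutdown();
  return 0;
}
```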


These robot jellyfish are on a mission to explore and guard the oceans

Bhagyashree R
24 Sep 2018
3 min read
Earlier last week, a team of US scientists from Florida Atlantic University (FAU) and the US Office of Naval Research published a paper on five jellyfish robots that they have built. The paper is titled "Thrust force characterization of free-swimming soft robotic jellyfish". The scientists' prime motive in building robotic jellyfish is to track and monitor fragile marine ecosystems without causing unintentional damage to them. These soft robots are powered by hydraulic silicone tentacles and can swim through openings narrower than their bodies, an ability the so-called 'jelly-bots' demonstrated by squeezing through circular holes cut in a plexiglass plate.

The design of the jelly-bots

The jelly-bots are modeled on a moon jellyfish (Aurelia aurita) during the ephyra stage of its life cycle, before it becomes a fully grown medusa. Soft hydraulic network actuators were chosen so the robots would not damage fragile biological systems. To allow the jellyfish to steer, the team uses two impeller pumps to inflate the eight tentacles. The mold models for the jellyfish robot were designed in SolidWorks and subsequently 3D printed on an Ultimaker 2 out of PLA (polylactic acid). Each jellyfish has a different rubber hardness, to test the effect hardness has on propulsion efficiency.

What the study measured

The jelly robots helped the scientists determine the impact of the following factors on the measured thrust force:

- Actuator material Shore hardness
- Actuation frequency
- Tentacle stroke actuation amplitude

Results

The scientists found that all three factors significantly impact mean thrust force generation. The greatest forces were measured with a half-stroke actuation amplitude at a frequency of 0.8 Hz and a tentacle actuator-flap material Shore hardness combination of 30-30. In the tests, the jellyfish were able to swim through openings narrower than the nominal diameter of the robot and demonstrated the ability to swim directionally. The jellyfish robots were tested in the ocean and have the potential to monitor and explore delicate ecosystems without inadvertently damaging them.

One of the scientists, Dr. Engeberg, told Tech Xplore: "In the future, we plan to incorporate environmental sensors like sonar into the robot's control algorithm, along with a navigational algorithm. This will enable it to find gaps and determine if it can swim through them."

To learn more about the jelly-bots, read the research paper published by the scientists.

Also read:
Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
MEPs pass a resolution to ban "Killer robots"
6 powerful microbots developed by researchers around the world
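The best-performing operating point reported in the paper, a half-stroke amplitude at 0.8 Hz, maps naturally onto a periodic actuation command. A minimal sketch, assuming a hypothetical pump interface that accepts a normalized inflation level between 0 and 1:

```cpp
#include <cmath>
#include <cstdio>

// Operating point from the study; the pump interface itself is hypothetical.
constexpr double kFrequencyHz = 0.8;  // actuation frequency
constexpr double kAmplitude   = 0.5;  // half stroke, on a normalized 0..1 scale

// Smooth inflation command oscillating between 0 and 2 * kAmplitude.
double inflationCommand(double t_seconds) {
  return kAmplitude * (1.0 - std::cos(2.0 * M_PI * kFrequencyHz * t_seconds));
}

int main() {
  // Print one actuation cycle (1 / 0.8 Hz = 1.25 s) of pump commands.
  for (double t = 0.0; t <= 1.25; t += 0.125)
    std::printf("t=%.3f s  command=%.3f\n", t, inflationCommand(t));
}
```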

The new Bolt robot from Sphero wants to teach kids programming

Prasad Ramesh
12 Sep 2018
2 min read
Sphero, a robotic toy company, has announced its latest robot, the Bolt robotic ball, aimed at teaching kids basic programming. It has advanced sensors, an LED matrix, and infrared sensors to communicate with other Bolt robots.

The robot itself is 73 mm in diameter, with an 8x8 LED matrix inside a transparent casing shell. This matrix displays helpful prompts, like a lightning bolt when Bolt is charging. Users can fully program the LED matrix to display a wide variety of icons tied to certain actions: a smiley face when a program completes, a sad face on failure, or arrow marks for direction changes.

The new Bolt has a longer battery life of around two hours and charges back up in six hours. It connects to the Sphero Edu app, where you can use community-created activities, build your own, analyze sensor data, and more. The casing is now transparent instead of the opaque colored shells of previous Sphero balls. The sphere weighs 200 g in all and houses infrared sensors that allow the Bolt to detect other nearby Bolts and interact with them; users can program specific interactions between multiple Bolts.

The Edu app supports coding by drawing on the screen or via Scratch blocks. You can also use JavaScript to program the robot to create custom games and drawings. There are sensors to track speed, acceleration, and direction, or to drive Bolt, which can be done without having to aim since Bolt has a compass. There is also an ambient light sensor that allows programming Bolt based on the room's brightness. Beyond education, you can also simply drive Bolt and play games with the Sphero Play app.

It sounds like a useful little robot and is available now to consumers for $149.99. Educators can also buy Bolt in 15-packs for classroom learning. For more details, visit the Sphero website.

Also read:
Is ROS 2.0 good enough to build real-time robotic applications? Spanish researchers find out.
How to assemble a DIY selfie drone with Arduino and ESP8266
ROS Melodic Morenia released


Is ROS 2.0 good enough to build real-time robotic applications? Spanish researchers find out.

Prasad Ramesh
11 Sep 2018
4 min read
Last Friday, a group of Spanish researchers published a paper titled "Towards a distributed and real-time framework for robots: evaluation of ROS 2.0 communications for real-time robotic applications". The paper describes an experimental setup exploring the suitability of ROS 2.0 for real-time robotic applications: ROS 2.0 communications are evaluated in a robotic inter-component communication case running on top of Linux. The researchers benchmarked worst-case latencies and characterized ROS 2.0 communications for real-time applications. The results indicate that a proper real-time configuration of the ROS 2.0 framework reduces jitter, making soft real-time communications possible, but some limitations still prevent hard real-time communications.

What is ROS?

ROS is a popular framework that provides services for the development of robotic applications. It offers a communication infrastructure, drivers for a variety of software and hardware components, and libraries for diagnostics, navigation, manipulation, and other tasks. ROS simplifies the process of creating complex and robust robot behavior across many robotic platforms. ROS 2.0 is the new version, which extends the concepts of the first. Data Distribution Service (DDS) middleware is used in ROS 2.0 because of its characteristics and benefits compared to other solutions.

The need for real-time behavior in robotic systems

In all robotic systems, tasks need to be time-responsive: while moving at a certain speed, a robot must be able to detect an obstacle and stop to avoid a collision. Robot systems often have timing requirements for executing tasks or exchanging data; if those requirements are not met, system behavior degrades or the system fails. With ROS being the standard software infrastructure for developing robotic applications, demand rose in the ROS community for real-time capabilities, and ROS 2.0 was created to deliver real-time performance. But to deliver a complete, distributed, and real-time solution for robots, ROS 2.0 needs to be surrounded by appropriate elements, which are described in the papers "Time-sensitive networking for robotics" and "Real-time Linux communications: an evaluation of the Linux communication stack for real-time robotic applications". ROS 2 uses DDS as its communication middleware, and DDS exposes Quality of Service (QoS) parameters which can be configured and tuned for real-time applications.

The results of the experiment

The researchers measured the real-time performance of ROS 2.0 communications over Ethernet in a PREEMPT-RT patched kernel, recording the end-to-end latencies between two ROS 2.0 nodes on different machines. A Linux PC and an embedded device, representing a robot controller (RC) and a robot component (C), were used for the setup. Four configurations were compared: the system with and without additional load, each with and without RT settings.

The results showed that a proper real-time configuration of the ROS 2.0 framework and DDS threads greatly reduces jitter and worst-case latencies, which means smoother and faster communication. However, there were limitations when non-critical traffic in the Linux network stack came into the picture. By configuring the network interrupt threads and using Linux traffic control QoS methods, some of these problems could be avoided. The researchers conclude that it is possible to achieve soft real-time communications with mixed-critical traffic using the Linux network stack, but hard real-time is not possible due to the aforementioned limitations. For a more detailed understanding of the experiments and results, you can read the research paper.

Also read:
Shadow Robot joins Avatar X program to bring real-world avatars into space
6 powerful microbots developed by researchers around the world
Boston Dynamics' 'Android of robots' vision starts with launching 1000 robot dogs in 2019
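The DDS QoS tuning the paper refers to is exposed directly in the ROS 2 client libraries. As a point of reference, here is how a publisher's QoS can be configured in modern rclcpp; the paper predates some of these options, so treat this as an illustration of the mechanism rather than the exact configuration the researchers used.

```cpp
#include <chrono>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/float64.hpp"

int main(int argc, char ** argv) {
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("rt_sensor_publisher");

  // QoS choices that favor low, predictable latency over delivery guarantees.
  rclcpp::QoS qos(rclcpp::KeepLast(1));  // history: keep only the newest sample
  qos.best_effort();                     // no retransmission of lost samples
  qos.durability_volatile();             // no replay for late-joining subscribers
  qos.deadline(rclcpp::Duration(std::chrono::milliseconds(10)));  // expect a sample every 10 ms

  auto pub = node->create_publisher<std_msgs::msg::Float64>("wheel_speed", qos);
  // Publishing would typically happen from a thread with a real-time
  // scheduling policy (e.g. SCHED_FIFO) on a PREEMPT-RT kernel, as in the paper.
  rclcpp::shutdown();
  return 0;
}
```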


Shadow Robot joins Avatar X program to bring real-world avatars into space

Savia Lobo
07 Sep 2018
2 min read
Shadow Robot Company, experts in grasping and manipulation for robotic hands, announced that they are joining a new space avatar programme named AVATAR X, led by ANA HOLDINGS INC. (ANA HD) and the Japan Aerospace Exploration Agency (JAXA).

AVATAR X aims to accelerate the integration of technologies such as robotics, haptics, and Artificial Intelligence (AI) to enable humans to remotely build camps on the Moon, support long-term space missions, and further explore space from Earth. To make this possible, Shadow will work closely with the programme's partners, leveraging the unique teleoperation system it has already developed, which is also available to purchase. AVATAR X is set to launch as a multi-phase programme that aims to revolutionize space development and make living on the Moon, Mars, and beyond a reality.

What will the AVATAR X programme include?

The AVATAR X programme will include several clever elements, among them Shadow's Dexterous Hand, which can be controlled by a CyberGlove worn by the operator. The hand is attached to a UR10 robot arm controlled by a PhaseSpace motion-capture tool worn on the operator's wrist. Both the CyberGlove and the motion-capture wrist tool have mapping capability, so the Dexterous Hand and the robot arm can mimic an operator's movements. The new system allows remote control of robotic technologies while providing distance and safety. Furthermore, Shadow uses an open-source platform, providing full access to the code to help users develop the software for their own specific needs.

Shadow's Managing Director, Rich Walker, says, "We're really excited to be working with ANA HD and JAXA on the AVATAR X programme and it gives us the perfect opportunity to demonstrate how our robotics technology can be leveraged for avatar or teleoperation scenarios away from UK soil, deep into space. We want everyone to feel involved at such a transformative time in teleoperation capabilities and encourage all those interested to enter the AVATAR XPRIZE competition."

To know more about AVATAR X in detail, visit ANA Group's press release.

Also read:
Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics
How Rolls Royce is applying AI and robotics for smart engine maintenance
AI powered Robotics : Autonomous machines in the making


6 powerful microbots developed by researchers around the world

Prasad Ramesh
01 Sep 2018
4 min read
When we hear the word robot, we may think of large industrial robots assembling cars, or of humanoids. However, some robots are so tiny you cannot see them with the naked eye. This article covers six such microbots, all in early stages of development.

Harvard's Ambulatory Microrobot (HAMR): a robotic cockroach

HAMR is a versatile, 1.8-inch-long robotic platform that resembles a cockroach. The HAMR itself weighs in under an ounce and can run, jump, and carry small items about twice its own weight. It is fast, moving at almost 19 inches per second. HAMR has given the researchers a useful base from which to build other ideas. For example, the HAMR-F, an enhanced version of HAMR, doesn't have any restraining wires: it can move around independently and is only slightly heavier (2.8 g) and slower than the HAMR. It is powered by a micro 8 mAh lithium polymer battery. Scientists at Harvard's School of Engineering and Applied Sciences also recently added footpads that allow the microbot to swim on the water's surface, sink, and walk under water.

RoboBees: robotic bees

Like the HAMR, the RoboBee by Harvard has improved over time; it can both fly and swim. Its first successful flight was in 2013, and in 2015 it was able to swim. More recently, in 2016, it gained the ability to "perch" on surfaces using static electricity, which allows the RoboBee to save power for longer flights. The 80-milligram robot can take a swim, leap up from the water, and then land. The RoboBee can flap its wings at 220 to 300 hertz in air and 9 to 13 hertz in water.

μRobotex: microbots from France

Scientists from the FEMTO-ST Institute in France have built the μRobotex platform, an extremely small new microrobot system. This system built the smallest house in the world inside a vacuum chamber: the robot used an ion beam to cut a silica membrane into tiny pieces for assembly. The micro house is 0.015 mm high and 0.020 mm broad; in comparison, a grain of sand is anywhere from 0.05 mm to 2 mm in diameter. The completed house was placed on the tip of a piece of optical fiber.

Salto: a one-legged jumper

Saltatorial locomotion on terrain obstacles (Salto), developed at the University of California, is a one-legged jumping robot that is 10.2 inches tall when fully extended. It weighs about 100 grams and can jump up to 1 meter in the air. Salto's skills show in that it can do more than a single jump: it can bounce off walls and perform several jumps in a row while avoiding obstacles. Salto was inspired by the galago, a small mammal expert at jumping. The idea behind Salto was robots that can leap over rubble to provide emergency services. The newer model is the Salto-1P.

Rolls-Royce's SWARM robots

Rolls-Royce teamed up with scholars from the University of Nottingham and Harvard University to develop independent tiny mobile robots called SWARM. They are about 0.4 inches in diameter and are part of Rolls-Royce's IntelligentEngine program. The SWARM robots are put into position by a robotic snake and use tiny cameras to capture parts of an engine that are hard to access otherwise. This is very useful for mechanics trying to figure out what is wrong with an engine without taking it apart. The future plan for SWARM is to perform inspections of aircraft engines without removing them from the airplanes.

Short-Range Independent Microrobotic Platforms (SHRIMP)

The Defense Advanced Research Projects Agency (DARPA) wants to develop insect-scale robots with "untethered mobility, maneuverability, and dexterity": in other words, microbots that can move around independently. DARPA plans to sponsor these robots as part of the SHRIMP program for search and rescue, disaster relief, and hazardous-environment inspection. It is also looking for robots that might work as prosthetics, or as eyes to see in places that are hard to reach.

These microbots are in early development stages, but once they enter production they will be very resourceful. From medical assistance to guided inspection in small areas, these microbots will prove useful in a variety of fields.

Also read:
Intelligent mobile projects with TensorFlow: Build a basic Raspberry Pi robot that listens, moves, sees, and speaks [Tutorial]
15 millions jobs in Britain at stake with AI robots set to replace humans at workforce
What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]

Boston Dynamics’ ‘Android of robots’ vision starts with launching 1000 robot dogs in 2019

Sugandha Lahoti
23 Jul 2018
2 min read
A video went viral in February showcasing a dog-like robot opening a door for another robot. These agile robots are the brainchild of Boston Dynamics, an American robotics company. Fast forward to this month: Boston Dynamics is gearing up to produce thousands of these robot dogs. According to a report by Inverse, the company has set a target of July 2019 to begin manufacturing 1,000 of its SpotMini robot dogs annually.

SpotMini is a smaller variant among Boston Dynamics' many robots. This four-legged robot weighs around 30 kg and can comfortably fit in an office or home. It is one of the quietest robots the company has built. SpotMini is completely mobile, with a 5-degree-of-freedom arm and multiple perception sensors for navigation and mobile manipulation.

Spot, SpotMini's elder sibling, stands close to four feet tall and weighs about 75 kg. This four-legged robot is made for rough-terrain mobility and superhuman stability; its video has been streamed on YouTube nearly 19 million times.

According to founder Marc Raibert, SpotMini is currently being tested for use in construction, delivery, security, and home assistance applications. The company has already announced plans to launch it in 2019 as a short-term goal. It has so far built almost ten robot dogs by hand and plans to build 100 models with contract manufacturers by the end of this year. In the long run, the company intends SpotMini to become a multi-use platform. At TechCrunch's TC Sessions: Robotics event 2018, Raibert stated that "the goal for us is to become what the Android operating system is for phones: a versatile foundation for limitless applications."

Also read:
Sony resurrects robotic pet Aibo with advanced AI
AI powered Robotics : Autonomous machines in the making
How to assemble a DIY selfie drone with Arduino and ESP8266
What we learned at the ICRA 2018 conference for robotics & automation


Amazon Alexa and AWS helping NASA improve their efficiency

Gebin George
22 Jun 2018
2 min read
While everyone else is busy playing songs and giving voice commands to Amazon Alexa, the US space agency NASA is using Amazon's voice assistant to organize its data-centric tasks more efficiently.

Tom Soderstrom, Chief Technology and Innovation Officer at NASA, said, "If you have an Alexa-controlled Amazon Echo smart speaker at home, tell her to enable the 'NASA Mars' app. Once done, ask Alexa anything about the Red Planet and she will come back with all the right answers. This enables serverless computing where we don't need to build for scale but for real-life work cases and get the desired results in a much cheaper way. Remember that voice as a platform is poised to give 10 times faster results. It is kind of a virtual helpdesk. Alexa doesn't need to know where the data is stored or what the passwords are to access that data. She scans and quickly provides us what we need. The only challenge now is to figure out how to communicate better with digital assistants and chatbots to make voice a more powerful medium."

Serverless computing gives developers the flexibility to deploy and run applications and services without thinking about scale or server management. AWS is the market leader in fully managed infrastructure services, helping organizations focus more on product development.

Alexa, for example, can help JPL (a federally funded research and development center managed for NASA) employees scan through 400,000 subcontracts and pull the requested copy of a contract from the dataset right onto the desktop in a jiffy. JPL has also integrated conference rooms with Alexa and IoT sensors, which helps them resolve queries quickly.

One JPL executive also stressed that AI is not going to take away human jobs, saying, "AI will transform industries ranging from healthcare to retail and e-commerce and auto and transportation. Sectors that won't embrace AI will be left behind. Humans are 80 percent effective and machines are also 80 percent effective. When you bring them together, they're nearly 95 percent effective."

Hence, voice-controlled, AI-powered digital assistants are here to stay, empowering digital transformation.

Also read:
How to Add an intent to your Amazon Echo skills
Microsoft commits $5 billion to IoT projects
Building Voice technology on IoT projects


ROS Melodic Morenia released

Gebin George
28 May 2018
2 min read
ROS is a middleware with a set of tools and software frameworks for building and simulating robots. ROS follows a stable release cycle, with a new version every year on the 23rd of May. ROS released its Melodic Morenia version on schedule this year, with a decent number of enhancements and upgrades. Following are the release notes.

class_loader header deprecation

class_loader's headers have been renamed, and the previous ones deprecated, in an effort to bring the package closer to multi-platform support and its ROS 2 counterpart. You can refer to the migration script provided for the header replacements, and PRs will be released for all the packages in the previous ROS distribution.

kdl_parser package enhancement

kdl_parser has deprecated a method that was linked to TinyXML (itself already deprecated). The tinyxml2 replacement is:

bool treeFromXml(const tinyxml2::XMLDocument * xml_doc, KDL::Tree & tree)

The deprecated API will be removed in N-turtle.

OpenCV version update

For standardization reasons, the OpenCV version used is restricted to 3.2.

Enhancements in pluginlib

As with class_loader, the headers were deprecated here as well to bring the package closer to multi-platform support. plugin_tool, which had been deprecated for years, has finally been removed in this version.

For more updates on the packages of ROS, refer to the ROS Wiki page.
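For context, here is how kdl_parser is typically used to build a KDL tree from a robot description, including the new tinyxml2-based overload named above. This is an illustrative sketch; check the kdl_parser documentation for your distribution.

```cpp
#include <cstdio>
#include <kdl/tree.hpp>
#include <kdl_parser/kdl_parser.hpp>
#include <tinyxml2.h>

int main() {
  // Common path: parse a URDF file straight into a KDL tree.
  KDL::Tree tree;
  if (!kdl_parser::treeFromFile("robot.urdf", tree)) {
    std::fprintf(stderr, "failed to construct KDL tree\n");
    return 1;
  }
  std::printf("joints: %u, segments: %u\n",
              tree.getNrOfJoints(), tree.getNrOfSegments());

  // Melodic's replacement overload takes a tinyxml2 document in place of
  // the deprecated TinyXML-1 type.
  tinyxml2::XMLDocument doc;
  if (doc.LoadFile("robot.urdf") == tinyxml2::XML_SUCCESS) {
    KDL::Tree tree_from_xml;
    kdl_parser::treeFromXml(&doc, tree_from_xml);
  }
  return 0;
}
```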

What we learned at the ICRA 2018 conference for robotics & automation

Savia Lobo
25 May 2018
5 min read
This year's ICRA 2018 conference featured interactive sessions, keynotes, exhibitions, workshops, and much more. Following are some of the interesting keynotes on machine learning, robotics, and more.

Note: The International Conference on Robotics and Automation (ICRA) is an international forum for robotics researchers to present their work, and a flagship conference of the IEEE Robotics and Automation Society. Held at the Brisbane Convention and Exhibition Centre from the 21st to the 25th of May 2018, the conference brought together experts at the frontier of science and technology in robotics and automation.

Implementing machine learning for safe, high-performance control of mobile robots

Traditional algorithms are designed using a priori knowledge of the system and its environment, including system dynamics and an environment map. Such an approach allows a system to work successfully in a predictable environment, but if the system does not know the details of its environment, performance can degrade badly. To build systems that work efficiently in unknown and uncertain situations, the speaker, Prof. Angela Schoellig, introduced systems that are capable of learning during operation and adapting their behaviour accordingly. Angela presented several approaches for online, data-efficient, safety-guaranteed learning for robot control. In these approaches, the algorithms can:

- Leverage insights from control theory
- Make use of neural networks and Gaussian processes, which are state-of-the-art probabilistic learning methods
- Take into account any prior knowledge about system dynamics

The speaker also demonstrated how such novel robot control and learning algorithms can be safe and effective in real-world scenarios; check out Angela Schoellig's video demonstrating these algorithms on self-flying and self-driving vehicles and mobile manipulators.

Meta-learning and the art of learning to learn

Pieter Abbeel, in his talk about meta-learning (learning to learn), explained how reinforcement learning and imitation learning have been successful in domains such as Atari and Go. (You can also check out 6 Key Challenges in Deep Learning for Robotics by Pieter Abbeel, presented at the NIPS 2017 conference.) Humans can naturally learn from past experiences and pick up new skills far more quickly than machines. Pieter explained some of his recent experiments on meta-learning, where agents learn the imitation or reinforcement learning algorithm itself and, using that algorithm as a base, can learn from past instances just like humans. With meta-learning, machines can acquire a new skill from a single demonstration or a few trials. He stated that meta-learning applies to standard few-shot classification benchmarks such as Omniglot and mini-ImageNet. To learn about meta-learning from the ground up, you can check out our article, What is Meta Learning?. You can also read our coverage of Pieter Abbeel's accepted paper at ICLR 2018.

Robo-peers: robust interaction in human-robot teams

Richard Vaughan, in this keynote, explained how robots behave in natural surroundings, i.e., among humans, animals, and other peer robots. His team has worked on behaviour strategies for mobile robots that give the robots sensing capabilities and allow them to behave in sophisticated, human-like ways, interacting robustly with the world and the other agents around them. Richard described a series of vision-mediated human-robot interactions conducted within groups of driving and flying robots; the mechanisms used were simple but highly effective.

From building robots to bridging the gap between robotics and AI

Robots now possess smart, reactive, and user-centered programming systems with which they can physically interact with the world. Today, even a layman can use cutting-edge robotics technology for complex tasks such as force-sensitive assembly and safe physical human-robot interaction; Franka Emika's Panda, the first commercial robot system of this kind, is an example of a robot with such abilities. Sami Haddadin, in this talk, proposed bridging the gap between model-based nonlinear control algorithms and data-driven machine learning via a holistic approach. He explained that neither pure control-based nor end-to-end learning algorithms come close to human-level, general-purpose machine intelligence. Two recent results reinforce this statement:

- Learning exact articulated robot dynamics using the concept of first-order principle networks
- Learning human-like manipulation skills by combining adaptive impedance control and meta-learning

Panda was, right from the beginning, released with consistent research interfaces and modules to enable the robotics and AI community to build on the developments in the field so far and to push the boundaries in manipulation, interaction, and general AI-enhanced robotics. Sami believes this step will enable the community to address the immense challenges in robotics and AI research.

Socially assistive robots: the next-gen healthcare helpers

Goldie Nejat observed that the world's elderly population is rising, and with it dementia, a disease with hardly any cure. Robots, she says, can become a unique strategic technology here, a crucial part of society that helps the aged population with day-to-day activities. In this talk she presented intelligent assistive robots that can improve the lives of older people, including those suffering from dementia. She discussed how the socially assistive robots Brian, Casper, and Tangy have been designed to autonomously provide cognitive and social interventions, help with activities of daily living, and lead group recreational activities in human-centered environments. These robots can serve individuals as well as groups of users, personalize their interactions to the needs of the users, and be integrated into the everyday lives of people outside the aged bracket as well.

Read more about the other keynotes and highlights on robotics on ICRA's official website.

Also read:
How to build an Arduino based 'follow me' drone
AI powered Robotics : Autonomous machines in the making
Tips and tricks for troubleshooting and flying drones safely


AI powered Robotics : Autonomous machines in the making

Savia Lobo
16 Apr 2018
7 min read
Say "robot" to someone today, and Sophia the humanoid flashes in front of the eye. That is where robotics has reached at present, supercharged by Artificial Intelligence. Robotics and Artificial Intelligence are often confused terms, with a thin line between the two. Traditional robots are pre-programmed humanoids or machines meant to do specific tasks irrespective of the environment they are placed in; they do not show any intelligent behaviour. With a sprinkle of Artificial Intelligence, these robots become artificially intelligent robots, controlled by AI programs that make them capable of taking decisions when confronted with real-world situations.

How has AI helped robotics?

You can loosely class Artificial Intelligence as general or narrow, based on the level of task specificity. General AI is the kind seen in movies like Terminator or The Matrix: it imparts wider knowledge and capabilities to machines, almost on a par with humans. However, general AI is far in the future and does not exist yet. Current robots are designed to assist humans in their day-to-day tasks in specific domains. For instance, the Roomba vacuum cleaner is largely automated, with very little human intervention. The cleaner can make decisions when confronted with choices: if the way ahead is blocked by a couch, it might decide to turn left because it has already vacuumed the carpet to the right.

Let's look at some basic capabilities that Artificial Intelligence has brought to robotics, using the example of a self-driving car (a minimal sketch of the resulting sense-decide-act loop follows this list):

- The power of perception and reasoning: Novel sensors, including sonar sensors, infrared sensors, Kinect sensors, and so on, give robots good perception skills with which they can adapt to any situation. Our self-driving car, with the help of these sensors, takes input data from the environment (identifying roadblocks, signals, objects such as people, other cars), labels it, transforms it into knowledge, and interprets it. It then modifies its behaviour based on this perception and takes the necessary actions.

- A learning process: With new experiences such as heavy traffic or a detour, the self-driving car must perceive and reason to reach conclusions. The AI creates a learning process as similar experiences repeat, storing knowledge and speeding up intelligent responses.

- Making correct decisions: With AI, the driverless car gains the ability to prioritize actions, such as taking another route in case of an accident or detour, or braking suddenly when a pedestrian or an object appears, in order to be safe and effective in the decisions it makes.

- Effective human interaction: This is the most prominent capability enabled by Natural Language Processing (NLP). The driverless car accepts and understands passenger commands through in-car voice commands based on NLP. The AI in the car understands the meaning of natural human language and responds readily to queries; for instance, given a destination address by the passenger, the AI will drive along the fastest route to get there. NLP also helps in understanding human emotions and sentiments.
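Stripped to its skeleton, the capabilities above form a sense-decide-act loop. A minimal, purely illustrative sketch in C++, with perception and actuation reduced to stubs:

```cpp
#include <cstdio>

// Purely illustrative sense-decide-act loop for a self-driving car.
enum class Action { Continue, Brake, Reroute };

struct Perception {
  bool pedestrian_ahead;  // e.g. from cameras or Kinect-style sensors
  bool road_blocked;      // e.g. from sonar or infrared sensors
};

Perception sense() { return {false, true}; }  // stub for sensor fusion

Action decide(const Perception &p) {
  if (p.pedestrian_ahead) return Action::Brake;    // safety comes first
  if (p.road_blocked)     return Action::Reroute;  // pick another route
  return Action::Continue;
}

void act(Action a) {
  static const char *kNames[] = {"continue", "brake", "reroute"};
  std::printf("action: %s\n", kNames[static_cast<int>(a)]);
}

int main() {
  for (int tick = 0; tick < 3; ++tick) act(decide(sense()));
}
```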
Real-world applications of AI in robotics

Sophia the humanoid is by far the best real-world amalgamation of robotics and Artificial Intelligence. Other real-world use cases of AI in robotics with practical applications include:

Self-supervised learning: This allows robots to create their own training examples for performance improvement. For instance, if a robot has to interpret long-range, ambiguous sensor data, it uses a priori training and data captured at close range. This knowledge is later incorporated into the robots and into optical devices that can detect and reject objects (dust and snow, for example). The robot becomes capable of detecting obstacles and objects in rough terrain and of 3D scene analysis and modeling vehicle dynamics. One example of a self-supervised learning algorithm is a road-detection algorithm in which the front-view monocular camera in the car uses a road probabilistic distribution model (RPDM) and fuzzy support vector machines (FSVMs). This algorithm was designed at MIT for autonomous vehicles and other mobile on-road robots.

The medical field: In the medical sphere, a collaboration through Cal-MR, the Center for Automation and Learning for Medical Robotics, between researchers at multiple universities and a network of physicians, created the Smart Tissue Autonomous Robot (STAR). Using innovations in autonomous learning and 3D sensing, STAR is able to stitch together pig intestines (used instead of human tissue) with better precision and reliability than the best human surgeons. STAR is not a replacement for surgeons, but in the future it could remain on standby to handle emergencies and assist surgeons in complex surgical procedures, offering major benefits in similar types of delicate surgery.

Assistive robots: These robots sense, process sensory information, and perform actions that benefit not only the general public but also people with disabilities and senior citizens. For instance, Bosch's driving assistant systems are equipped with radar sensors and video cameras, allowing them to detect road users even in complex traffic situations. Another example is the MICO robotic arm, which uses a Kinect sensor.

Challenges in adopting AI in robotics

Having AI robots means less pre-programming, the replacement of manpower, and so on, and there is a persistent fear that robots may outperform humans in decision making and other intellectual tasks. But one has to take risks to explore what this partnership could lead to. Casting an AI environment in robotics is no cakewalk, and there are challenges experts will face. Some of them include:

Legal aspects: Robots are, after all, machines. What if something goes wrong? Who is liable? One way to mitigate bad outcomes is to develop extensive testing protocols for the design of AI algorithms, improved cybersecurity protections, and input validation standards. This requires not only AI experts with a deep understanding of the technologies, but also experts from other disciplines such as law, the social sciences, and economics.

Getting used to an automated environment: While traditional robots had to be fully pre-programmed, with AI this changes to a certain extent: experts feed in the initial algorithms, and further changes are adopted by the robot through self-learning. AI is feared for its capacity to take over jobs and automate many processes. Hence, broad acceptance of the new technology is required, and a careful, managed transition of workers should be carried out.

Quick learning with fewer samples: The AI systems within robots should help them learn quickly even when the supply of data is limited, unlike deep learning, which requires hoards of data to formulate an output.

The AI-robotics fortune

The future of this partnership is bright, as robots become more self-dependent and may well assist humans in their decision making. However, much of this still sounds like fiction. At present we mostly have semi-supervised learning, which requires a human touch for the essential functioning of AI systems. Unsupervised learning, one-shot learning, and meta-learning techniques are also creeping in, promising machines that will not require human intervention or guidance any more. Robotics manufacturers such as Silicon Valley Robotics and Mayfield Robotics, together with auto manufacturers such as Toyota and BMW, are on a path to create autonomous vehicles, which implies that AI is becoming a priority investment for many.