
Tech News - IoT and Hardware

119 Articles

Intel introduces cryogenic control chip, ‘Horse Ridge’ for commercially viable quantum computing

Fatema Patrawala
11 Dec 2019
4 min read
On Monday, Intel Labs introduced a first-of-its-kind cryogenic control chip codenamed Horse Ridge. According to Intel, Horse Ridge will enable commercially viable quantum computers and speed up the development of full-stack quantum computing systems. Intel announced that Horse Ridge will enable control of multiple quantum bits (qubits) and set a clear path toward scaling to larger systems. This is a major milestone on the path to quantum practicality: today's quantum computers only work at temperatures near absolute zero, and Intel is trying to ease that constraint with this control chip. As per Intel, Horse Ridge will enable control at very low temperatures by eliminating the hundreds of wires going into the refrigerated case that houses the quantum computer.

Horse Ridge was developed in partnership with Intel's research collaborators at QuTech at Delft University of Technology. It is fabricated using Intel's 22-nanometer FinFET manufacturing technology. The in-house fabrication of these control chips will dramatically accelerate the company's ability to design, test, and optimize a commercially viable quantum computer, the company said.

"A lot of research has gone into qubits, which can do simultaneous calculations. But Intel saw that controlling the qubits created another big challenge to developing large-scale commercial quantum systems," states Jim Clarke, director of quantum hardware at Intel, in the official press release.

"It's pretty unique in the community, as we're going to take all these racks of electronics you see in a university lab and miniaturize that with our 22-nanometer technology and put it inside of a fridge," added Clarke. "And so we're starting to control our qubits very locally without having a lot of complex wires for cooling."

The name "Horse Ridge" is inspired by one of the coldest regions in Oregon, known as Horse Ridge. The chip is designed to operate at cryogenic temperatures of approximately 4 kelvins, about 4 degrees Celsius (7 degrees Fahrenheit) above absolute zero.

What is the innovation behind Horse Ridge

Quantum computers promise the potential to tackle problems that conventional computers can't handle by themselves. They leverage a phenomenon of quantum physics that allows qubits to exist in multiple states simultaneously. As a result, qubits can conduct a large number of calculations at the same time, dramatically speeding up complex problem-solving.

But Intel acknowledges that the quantum research community still lags behind in demonstrating quantum practicality, a benchmark to determine whether a quantum system can deliver game-changing performance to solve real-world problems. To date, researchers have focused on building small-scale quantum systems to demonstrate the potential of quantum devices. In these efforts, researchers have relied on existing electronic tools and high-performance-computing rack-scale instruments to connect the quantum system to the traditional computational devices that regulate qubit performance and program the system inside the cryogenic refrigerator.

These devices are often custom designed to control individual qubits, requiring hundreds of connective wires in and out of the refrigerator. This extensive control cabling for each qubit hinders the ability to scale the quantum system to the hundreds or thousands of qubits required to demonstrate quantum practicality, not to mention the millions of qubits required for a commercially viable quantum solution.
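To see why the cabling problem dominates, here is a rough, illustrative calculation; the three-wires-per-qubit and 128-qubits-per-controller figures are assumptions for the sketch, not Intel's published numbers.

```python
# Illustrative sketch (not Intel's actual figures): why per-qubit cabling
# fails to scale, and why an integrated cryogenic controller helps.

def wires_needed(qubits: int, wires_per_qubit: int = 3) -> int:
    """Wires into the fridge when every qubit needs its own control lines."""
    return qubits * wires_per_qubit

def wires_with_integrated_controller(qubits: int, qubits_per_chip: int = 128,
                                     io_lines_per_chip: int = 4) -> int:
    """Wires when a cryogenic SoC multiplexes many qubits per control chip."""
    chips = -(-qubits // qubits_per_chip)  # ceiling division
    return chips * io_lines_per_chip

for n in (50, 1_000, 1_000_000):
    print(f"{n:>9} qubits: {wires_needed(n):>9} direct wires "
          f"vs {wires_with_integrated_controller(n):>6} with in-fridge control")
```

With dedicated lines, wiring grows linearly with qubit count; with an in-fridge controller, the wire count grows only with the number of control chips.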
With Horse Ridge, Intel radically simplifies the control electronics required to operate a quantum system. Replacing these bulky instruments with a highly integrated system-on-chip (SoC) will simplify system design and allow for sophisticated signal processing techniques that accelerate setup time, improve qubit performance, and enable the system to efficiently scale to larger qubit counts.

"One option is to run the control electronics at room temperature and run coax cables down to configure the qubits. But you can immediately see that you're going to run into a scaling problem because you get to hundreds or thousands of cables and it's not going to work," said Richard Uhlig, managing director of Intel Labs. "What we've done with Horse Ridge is that it's able to run at temperatures that are much closer to the qubits themselves. It runs at about 4 degrees Kelvin. The innovation is that we solved the challenges around getting CMOS to run at those temperatures and still have a lot of flexibility in how the qubits are controlled and configured."

To know more about this exciting news, check out the official announcement from Intel.

Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
The US to invest over $1B in quantum computing, President Trump signs a law
Quantum computing, edge analytics, and meta learning: key trends in data science and big data in 2019


Yubico reveals Biometric YubiKey at Microsoft Ignite

Fatema Patrawala
07 Nov 2019
4 min read
On Tuesday, at the ongoing Microsoft Ignite, Yubico, the leading provider of authentication and encryption hardware, announced the long-awaited YubiKey Bio. The YubiKey Bio is the first YubiKey to support fingerprint recognition for secure and seamless passwordless logins. According to the team, this has been one of the most requested features among YubiKey users.

Key features in YubiKey Bio

The YubiKey Bio delivers the convenience of biometric login with the added benefit of Yubico's hallmark security, reliability, and durability assurances. Biometric fingerprint credentials are stored in the secure element, which helps protect them against physical attacks. As a result, a single, trusted, hardware-backed root of trust delivers a seamless login experience across different devices, operating systems, and applications.

With support for both biometric- and PIN-based login, the YubiKey Bio leverages the full range of multi-factor authentication (MFA) capabilities outlined in the FIDO2 and WebAuthn standard specifications. In keeping with Yubico's design philosophy, the YubiKey Bio will not require any batteries, drivers, or associated software. The key seamlessly integrates with the native biometric enrollment and management features supported in the latest versions of Windows 10 and Azure Active Directory, making it quick and convenient for users to adopt a phishing-resistant passwordless login flow.

"As a result of close collaboration between our engineering teams, Yubico is bringing strong hardware-backed biometric authentication to market to provide a seamless experience for our customers," said Joy Chik, Corporate VP of Identity at Microsoft. "This new innovation will help drive adoption of safer passwordless sign-in so everyone can be more secure and productive."

The Yubico team has worked with Microsoft over the past few years to help drive the future of passwordless authentication through the creation of the FIDO2 and WebAuthn open authentication standards. Additionally, they have built YubiKey integrations with the full suite of Microsoft products, including Windows 10 with Azure Active Directory and Microsoft Edge with Microsoft Accounts.

Microsoft Ignite attendees saw a live demo of passwordless sign-in to Microsoft Azure Active Directory accounts using the YubiKey Bio. The team also promises that by early next year, enterprise users will be able to authenticate to on-premises Active Directory integrated applications and resources, and get seamless single sign-on (SSO) to cloud- and SAML-based applications. To take advantage of strong YubiKey authentication in Azure Active Directory environments, users can refer to this page for more information.

On Hacker News, this news has received mixed reactions: while some are in favour of biometric authentication, others believe that keeping stronger passwords is still a better choice. One of them commented, "1) This is an upgrade to the touch sensitive button that's on all YubiKeys today. The reason you have to touch the key is so that if an attacker gains access to your computer with an attached Yubikey, they will not be able to use it (it requires physical presence). Now that touch sensitive button becomes a fingerprint reader, so it can't be activated by just anyone. 2) The computer/OS doesn't have to support anything for this added feature."

Another user responds, "A fingerprint is only going to stop a very opportunistic attacker.
Someone who already has your desktop and app password and physical access to your desktop can probably get a fingerprint off a glass, cup or something else. I don't think this product is as useful as it seems at first glance. Using stronger passwords is probably just as safe."

Google updates biometric authentication for Android P, introduces BiometricPrompt API
GitHub now supports two-factor authentication with security keys using the WebAuthn API
You can now use fingerprint or screen lock instead of passwords when visiting certain Google services thanks to FIDO2 based authentication
Microsoft and Cisco propose ideas for a Biometric privacy law after the state of Illinois passed one
SafeMessage: An AI-based biometric authentication solution for messaging platforms
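Mechanically, the phishing resistance promised by FIDO2/WebAuthn comes from hardware-backed public-key challenge-response rather than shared secrets. The sketch below illustrates that flow in Python using the cryptography package; it is a protocol-shape illustration, not Yubico's implementation, and the in-memory private key stands in for one sealed inside the YubiKey's secure element.

```python
# Minimal sketch of the challenge-response at the core of FIDO2/WebAuthn.
# In a YubiKey the private key lives in a secure element and never leaves
# the device; here an in-memory key stands in for it. Requires `cryptography`.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator creates a key pair; the server stores
# only the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server sends a random challenge; the authenticator signs it
# only after user verification (the fingerprint match on a YubiKey Bio).
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The server verifies; verify() raises InvalidSignature on failure. A phisher
# who steals a password learns nothing useful: each signed challenge is
# single-use and the private key never leaves the hardware.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified: login accepted")
```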


Researchers reveal Light Commands: laser-based audio injection attacks on voice-control devices like Alexa, Siri and Google Assistant

Fatema Patrawala
06 Nov 2019
5 min read
Researchers from the University of Electro-Communications in Tokyo and the University of Michigan released a paper on Monday that gives alarming cues about the security of voice-control devices. In the paper, the researchers present ways in which they were able to manipulate Siri, Alexa, and other devices using "Light Commands", a vulnerability in MEMS (micro-electro-mechanical systems) microphones.

Light Commands was discovered this year in May. It allows attackers to remotely inject inaudible and invisible commands into voice assistants, such as Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri, using light. This vulnerability can become more dangerous as voice-control devices gain more popularity.

How Light Commands work

Consumers use voice-control devices for many applications, for example to unlock doors, make online purchases, and more, with simple voice commands. The research team tested a handful of such devices and found that Light Commands can work on any smart speaker or phone that uses MEMS microphones. These systems contain tiny components that convert audio signals into electrical signals. By shining a laser through a window at the microphones inside smart speakers, tablets, or phones, a faraway attacker can remotely send inaudible and potentially invisible commands that are then acted upon by Alexa, Portal, Google Assistant, or Siri.

Many users do not enable voice authentication or passwords to protect their devices from unauthorized use. Hence, an attacker can use light-injected voice commands to unlock a victim's smart-lock-protected home doors, or even locate, unlock, and start various vehicles. The researchers also mention that Light Commands can be executed at long distances: to prove this, they demonstrated the attack in a 110-meter hallway, the longest hallway available during the research. Below is a reference image of the demonstration; the team has also captured a few videos of it.

Source: Light Commands research paper. Experimental setup for exploring attack range in the 110 m long corridor

The Light Commands attack can be executed using a simple laser pointer, a laser driver, and a sound amplifier. A telephoto lens can be used to focus the laser for long-range attacks.

Detecting Light Commands attacks

The researchers also describe how one can detect whether a device is being attacked with Light Commands. Although command injection via light makes no sound, an attentive user can notice the attacker's light beam reflected on the target device. Alternatively, one can attempt to monitor the device's verbal response and light-pattern changes, both of which serve as command confirmation. The researchers add that, so far, they have not seen any cases where the Light Commands attack has been maliciously exploited.

Limitations in executing the attack

Light Commands do have some limitations in execution:

- Lasers must point directly at a specific component within the microphone to transmit audio information.
- Attackers need a direct line of sight and a clear pathway for lasers to travel.
- Most light signals are visible to the naked eye and would expose attackers. Also, voice-control devices respond out loud when activated, which could alert nearby people of foul play.
- Controlling advanced lasers with precision requires a certain degree of experience and equipment, so there is a high barrier to entry when it comes to long-range attacks.
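As a rough illustration of the signal path described under "How Light Commands work", the sketch below amplitude-modulates a stand-in audio command onto a laser's intensity and recovers it, the way a MEMS diaphragm effectively demodulates the light. All parameters are invented for the example.

```python
# Illustrative sketch of the attack's signal path: an audio command is
# amplitude-modulated onto a laser's drive intensity, and the MEMS microphone
# turns the light intensity back into "sound". Parameters are made up.
import numpy as np

fs = 48_000                                  # sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)                # half a second of signal
audio = 0.8 * np.sin(2 * np.pi * 440 * t)    # stand-in voice command

bias = 1.0                                   # DC laser drive (intensity must stay >= 0)
laser_intensity = bias * (1 + 0.5 * audio)   # AM: intensity tracks the audio

# The photoacoustic response at the diaphragm is proportional to intensity;
# removing the DC bias recovers the waveform the assistant "hears".
recovered = (laser_intensity / bias) - 1.0
print("max recovery error:", np.max(np.abs(recovered - 0.5 * audio)))
```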
How to mitigate such attacks

In the paper, the researchers suggest adding an additional layer of authentication in voice assistants to mitigate the attack. They also suggest that manufacturers can attempt to use sensor-fusion techniques, such as acquiring audio from multiple microphones: when the attacker uses a single laser, only a single microphone receives a signal while the others receive nothing, so manufacturers can detect such anomalies and ignore the injected commands.

Another proposed approach is reducing the amount of light reaching the microphone's diaphragm. This can be done with a barrier that physically blocks straight light beams, eliminating the line of sight to the diaphragm, or with a non-transparent cover on top of the microphone hole that reduces the amount of light hitting the microphone. However, the researchers also concede that such physical barriers are only effective up to a point, as an attacker can always increase the laser power in an attempt to pass through the barriers or create a new light path.

Users discuss the photoacoustic effect at play

On Hacker News, this research has gained much attention as users find it interesting and applaud the researchers for the demonstration. Some discuss the price and features of the laser pointers and laser drivers available to hack the voice assistants. Others discuss how such techniques come into play; one of them says, "I think the photoacoustic effect is at play here. Discovered by Alexander Graham Bell has a variety of applications. It can be used to detect trace gases in gas mixtures at the parts-per-trillion level among other things. An optical beam chopped at an audio frequency goes through a gas cell. If it is absorbed, there's a pressure wave at the chopping frequency proportional to the absorption. If not, there isn't. Synchronous detection (e.g. lock in amplifiers) knock out any signal not at the chopping frequency. You can see even tiny signals when there is no background. Hearing aid microphones make excellent and inexpensive detectors so I think that the mics in modern phones would be comparable. Contrast this with standard methods where one passes a light beam through a cell into a detector, looking for a small change in a large signal. https://chem.libretexts.org/Bookshelves/Physical_and_Theoret... Hats off to the Michigan team for this very clever (and unnerving) demonstration."

Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
How Chaos Engineering can help predict and prevent cyber-attacks preemptively
An unpatched security issue in the Kubernetes API is vulnerable to a "billion laughs" attack
Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries
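Returning to the sensor-fusion mitigation above, here is a minimal sketch of the anomaly check a manufacturer might run across a microphone array: a real voice reaches every microphone, while a focused laser hits only one. The four-channel array and the 10x energy threshold are assumptions for illustration; the paper does not prescribe specific values.

```python
# Sketch of the sensor-fusion countermeasure: flag commands whose energy is
# wildly unbalanced across microphone channels. Thresholds are illustrative.
import numpy as np

def looks_like_light_injection(mic_channels: np.ndarray, ratio: float = 10.0) -> bool:
    """mic_channels: array of shape (n_mics, n_samples)."""
    energy = np.sum(mic_channels.astype(float) ** 2, axis=1)
    second_loudest = np.partition(energy, -2)[-2]
    return energy.max() > ratio * max(second_loudest, 1e-12)

rng = np.random.default_rng(0)
voice = rng.normal(size=(4, 1024))                    # similar energy on all mics
laser = np.zeros((4, 1024))
laser[2] = rng.normal(size=1024) * 5                  # only one mic sees a signal

print(looks_like_light_injection(voice))   # False: accept the command
print(looks_like_light_injection(laser))   # True: ignore the command
```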


Amazon's hardware event 2019 highlights: a high-end Echo Studio, the new Echo Show 8, Echo Loops, and more

Bhagyashree R
30 Sep 2019
10 min read
At its annual hardware event 2019, Amazon unveiled an avalanche of Alexa-powered products. It introduced a high-end Echo Studio, the new Echo Show 8, an Echo Dot with a clock, and a four-in-one Amazon Smart Oven. The company is also trying to enter the smart-wearables market with its Echo Frames eyewear and Echo Loop. It also debuted the Echo Buds earbuds, a competitor to Apple's AirPods.

Echo Frames and Echo Loop are part of Amazon's Day 1 Editions program, a program for experimental products that are offered in limited availability to gauge customers' response and then mass-produced if the response is positive.

Alexa becomes more "emotive and expressive"

Amazon announced that Alexa now has a multilingual mode. This new mode will initially be available in three countries: the US, Canada, and India. Other than English, Alexa will speak Spanish in the US, French in Canada, and Hindi in India. Customers will be able to interact with Alexa-powered devices in both languages simultaneously.

In addition to becoming a polyglot, Alexa will also be more "emotive and expressive" with the help of a new neural text-to-speech model. Additionally, customers will be able to switch Alexa's voice to a celebrity voice. It will use the new text-to-speech technology to mimic celebrity voices, with Samuel L. Jackson's being the first. Amazon will roll out additional celebrity voices next year, priced at $0.99 each.

Amazon's steps towards better privacy

Amazon's Alexa has raised several privacy concerns among users. In July, Amazon admitted that a few voice recordings made by Alexa are never deleted from the company's servers, even when the user manually deletes them. Other news in April this year revealed that when you speak to an Echo smart speaker, not only Alexa but potentially Amazon employees also listen to your requests. In May, two lawsuits were filed in Seattle stating that Amazon is recording voiceprints of children using its Alexa devices without their consent.

The company says it is taking steps to address these privacy concerns. Amazon's hardware and services chief Dave Limp announced, "We're investing in privacy across the board. Privacy cannot be an afterthought when it comes to the devices and services we offer our customers. It has to be foundational and built-in from the beginning for every piece of hardware, software, and service that we create."

Amazon has introduced a new set of features that will give users more control over the voice recordings stored on their Alexa devices. Users will be able to hear everything Alexa has recorded via a voice command and delete recordings on a rolling three-month or eight-month basis.

Amazon's Ring doorbells have faced criticism from privacy and civil-rights advocates because of their ties with police departments. "While more surveillance footage in neighborhoods could help police investigate crimes, the sheer number of cameras run by Amazon's Ring business raises questions about privacy involving both law enforcement and tech giants," a story by CNET revealed. To address this concern, at least in part, Ring video doorbells now have a new feature called Home Mode that stops audio and video recording when the owner is home.

Coming to the privacy of kids, Amazon announced that parents can use a new setting called Alexa Communications for Kids. This will help them determine the contacts their kids are allowed to interact with when using the Echo Dot Kids Edition.
The Echo family

Echo with improved audio quality

Amazon has revived its baseline Echo speaker with improved audio quality. The new audio hardware includes neodymium drivers, more volume, and a stronger mid-range. Users also have new colorful fabric covers (Twilight Blue, Charcoal, Heather Grey, and Sandstone) to choose from. It is priced at $99, the same as its predecessor.

Echo Studio, Amazon's first high-end smart speaker with immersive 3D audio support

Amazon's big reveal of its first high-end smart speaker, Echo Studio, was probably one of the key highlights of the event. It is also the first smart speaker to feature 3D audio, with both Dolby Atmos and Sony's 360 Reality Audio codecs on board. It was built alongside Amazon's new Music HD streaming service to give Echo customers a way to listen to lossless music.

Echo Studio achieves its immersive 3D soundscape with the help of five drivers: three 2-inch midrange speakers, a 1-inch tweeter, and a 5.25-inch woofer. Of the three midrange speakers, two emit sound from the sides, while the third emits from the front of the cylinder. These are strategically placed so that Echo Studio is able to "position" sound in a 3D space.

Echo Dot with Clock

Amazon's popular entry-level smart speaker, the Echo Dot, now has a digital clock built into the front, next to the speaker grille. Its LED display also allows the Dot to show the weather or a countdown timer. This new version will not replace the current Dot, but will instead exist alongside it in the company's current Echo lineup.

Echo Show 8, a smaller 8-inch version of the 10-inch Echo Show

Back in June, Amazon introduced the Echo Show 5, which packs a lot of features into a compact smart display and serves as an alarm-clock alternative. There is already a 10-inch flagship model of the Echo device, and at Wednesday's event Amazon announced yet another version of the smart screen: Echo Show 8. Echo Show 8 provides audio quality similar to the 10-inch version and has a built-in privacy shutter. It also includes the new Drop In On All feature that lets users create a large group chat with family and friends.

Echo Loop and Frames

This time Amazon has also ventured into smart wearables with the Alexa-powered Echo Loop and Echo Frames. The main purpose of these two smart wearables is to let customers use Alexa wherever they go, whenever they want.

Echo Loop is a smart ring made out of titanium that activates when you press a tiny discreet button. It features built-in microphones and speakers to facilitate interaction with Alexa, and it allows you to shut off the microphones by double-tapping the action button. Echo Loop comes in four sizes: small, medium, large, and extra-large; you can also get a ring-sizing kit to help you figure out which size is best for you. As for battery life, Amazon promises it will last about a day. It has a vibrating haptic engine for notifications and connects to your phone's Alexa app via Bluetooth.

Echo Frames look like typical black-framed spectacles. They are lightweight and compatible with most prescription lenses. They have built-in directional microphones for interacting with Alexa that can be turned off with a double-press of a button when not needed. They rely on Amazon's open-ear technology to send the assistant's responses to your ears.
Echo Buds with Bose noise cancellation technology

Amazon is challenging Apple's AirPods and Samsung Galaxy Buds with its new Echo Buds. They provide hands-free access to Alexa and include Bose's Active Noise Reduction technology. Each earbud has a pair of balanced armature drivers to deliver good bass. Though five hours of battery life isn't great, the charging case brings the total runtime up to 20 hours before you need to plug in again.

Echo Glow, a multicolor lamp for kids

Echo Glow is a multicolor lamp for kids that does not have Alexa onboard. To make it work, you need to connect it to one of your Alexa-enabled devices and ask Alexa to change the color, adjust brightness, and create helpful routines. It can also be controlled with a tap. A couple of its interesting use cases include a "rainbow timer", a wake-up light alarm, and a campfire mode.

Echo Flex is a small Echo that plugs directly into the wall

Echo Flex is an affordable and versatile version of the Echo Dot smart speaker. You can plug the device directly into a wall outlet to get Alexa at places the smart assistant otherwise couldn't reach. With Echo Flex, you can manage all your compatible smart devices using voice commands. For instance, you can switch on the lamp before getting out of bed or dim the lights from the couch to watch a movie.

Amazon Sidewalk, a low-power, low-bandwidth network

Most wireless standards, including Wi-Fi, ZigBee, and Z-Wave, have short range and are typically confined to your home. Other major wireless standards like LTE have a much larger range, but are expensive, hard to maintain, and eat up a vast amount of power. Amazon says that its Sidewalk network can solve this problem. It is a new wireless standard that casts a signal as far as a mile while keeping power and bandwidth low. To achieve this, the company has repurposed unlicensed 900 MHz spectrum, the same spectrum used by cordless phones and walkie-talkies to communicate. But unlike walkie-talkies or cordless phones, devices using Amazon Sidewalk will form a mesh network.

Use cases for this network include water sensors to keep the plants in your garden quenched, or a mailbox device to let you know when you've got mail. The company will also introduce a smart dog tag next year, called Ring Fetch, to help you track your dogs.

This announcement started a discussion on Hacker News. Though some users were impressed by the Ring Fetch use case, others felt that the company has reinvented the wheel and is trying to introduce another proprietary protocol. "Maaaan, why in gods name do companies have to keep reinventing the wheel. There's so many protocols and specifications out there already that they just have to pick one and improve upon it with the goal of making it backward compatible with "older" versions of the protocol," a user added.

People discussed LoRaWAN, a low-power, wide-area networking protocol for connecting battery-operated devices to the internet. A user commented, "LoRaWAN fits exactly this use-case and depending on the region, can operate on any of the ISM bands. This article is very bare on technical details, but I'm so confused. LoRa's made so much effort in this space by literally mapping out every single ISM band they can (sub-GHz) and reaching out to regulators where they couldn't find a compatible match.
Amazon can't possibly think the 900 MHz device is "free" globally."

Liz O'Sullivan, an AI activist, also shared her perspective on Twitter.
https://twitter.com/lizjosullivan/status/1177243350283542528

Amazon also made some announcements for people who love cooking. It unveiled an Alexa-compatible kitchen countertop appliance, the Amazon Smart Oven, a 4-in-1 appliance that functions as a convection oven, microwave, air fryer, and food warmer. Users will also be able to leverage a new Alexa feature called "scan-to-cook", which allows them to scan pre-packaged food products, including the ones sold by Amazon-owned Whole Foods, and have the Amazon Smart Oven cook them automatically.

Amazon's partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash
CES 2019: Top announcements made so far
What if buildings of the future could compute? European researchers make a proposal.
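As a toy illustration of the Sidewalk mesh idea discussed above, the sketch below shows why meshing extends reach: a device beyond direct radio range can still be reached by hopping through neighbors. The coordinates, ranges, and routing are all invented; Amazon has not published Sidewalk's protocol details.

```python
# Toy sketch of mesh reachability: breadth-first search over devices that
# are within radio range of each other. Distances are in arbitrary units.
from collections import deque

def reachable(nodes: dict, src: str, dst: str, radio_range: float) -> bool:
    def near(a: str, b: str) -> bool:
        (x1, y1), (x2, y2) = nodes[a], nodes[b]
        return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= radio_range ** 2

    seen, queue = {src}, deque([src])
    while queue:
        cur = queue.popleft()
        if cur == dst:
            return True
        for n in nodes:
            if n not in seen and near(cur, n):
                seen.add(n)
                queue.append(n)
    return False

homes = {"hub": (0, 0), "neighbor1": (0.8, 0), "neighbor2": (1.6, 0), "mailbox": (2.4, 0)}
print(reachable(homes, "hub", "mailbox", radio_range=1.0))   # True, via two hops
print(reachable(homes, "hub", "mailbox", radio_range=0.5))   # False, gap too wide
```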


Tesla Software Version 10.0 adds Smart Summon, in-car karaoke, Netflix, Hulu, and Spotify streaming

Sugandha Lahoti
27 Sep 2019
3 min read
Tesla has rolled out a new software version for its cars, Tesla Software Version 10.0, with a host of features for Model S, Model X, and Model 3 owners. Software v10 has in-car karaoke and entertainment services like Netflix and Hulu, as well as Spotify Premium account access.

https://youtu.be/NfMtONBK8dY

Probably the most interesting feature is Smart Summon. If you are a customer who has purchased Full Self-Driving Capability or Enhanced Autopilot, you are eligible for the update. With this feature, you can summon your car or have it navigate a parking lot, as long as the car is within your line of sight. This feature, Tesla says, is perfect "if you have an overflowing shopping cart, are dealing with a fussy child, or simply don't want to walk to your car through the rain."

Tesla's updated file system now separates videos captured by the car's cameras in Dashcam and Sentry Mode; they will be auto-deleted when there's a need to free up storage.

Tesla Software Version 10.0 is jam-packed with entertainment options

With Tesla Theatre, you can stream Netflix, YouTube, and Hulu or Hulu + Live TV right from your car while parked. Chinese customers get iQiyi and Tencent Video access. Spotify Premium account access is also available in all supported markets, in addition to Slacker Radio and TuneIn. For customers in China, Tesla offers the Ximalaya service for podcasts and audiobooks. Additionally, there is a karaoke system, "Car-aoke", which includes a library of music and song lyrics that passengers and drivers can use while parked or driving.

Tesla also added new navigation features that suggest interesting restaurants and sightseeing opportunities within your car's range. Maps are also improved, so search results are sorted by distance to each destination.

Tesla Arcade has a new Cuphead port. Cuphead is a run-and-gun video game developed and published by StudioMDHR. Using a USB controller, single-player and co-op modes are available to play in the Tesla Edition of Cuphead.

Tesla's new software update has got Twitterati thrilled.

https://twitter.com/mortchad/status/1177301454446460933
https://twitter.com/ChrisJCav/status/1177304907197534208
https://twitter.com/A13Frank/status/1177339094835191808

To receive this update as quickly as possible, Tesla says, make sure your car is connected to Wi-Fi. You'll automatically receive Version 10.0 when it's ready for your car based on your location and vehicle configuration; there is no need to request the update.

Tesla reports a $408 million loss in its Q2 earnings call; CTO and co-founder JB Straubel steps down
Tesla Autonomy Day takeaways: Full Self-Driving computer, Robotaxis launching next year, and more
Researchers successfully trick Tesla autopilot into driving into opposing traffic via "small stickers as interference patches on the ground"
Tesla is building its own AI hardware for self-driving cars


Silicon-Interconnect Fabric is soon on its way to replace Printed Circuit Boards, new UCLA research claims

Sugandha Lahoti
26 Sep 2019
4 min read
Researchers from UCLA claim in a new study that printed circuit boards could be replaced with what they call silicon-interconnect fabric, or Si-IF. This fabric allows bare chips to be connected directly to wiring on a separate piece of silicon. The researchers are Puneet Gupta and Subramanian Iyer, members of the electrical engineering department at the University of California, Los Angeles.

How can Silicon-Interconnect Fabric be useful

In a report published on IEEE Spectrum on Tuesday, the researchers suggest that replacing printed circuit boards with silicon will especially help in building smaller, lighter-weight systems for wearables and other size-constrained gadgets. They write, "Unlike connections on a printed circuit board, the wiring between chips on our fabric is just as small as wiring within a chip. Many more chip-to-chip connections are thus possible, and those connections are able to transmit data faster while using less energy."

Si-IF can also be useful for building "powerful high-performance computers that would pack dozens of servers' worth of computing capability onto a dinner-plate-size wafer of silicon."

The silicon-interconnect fabric could also dissolve the system-on-chip (SoC) into integrated collections of dielets, or chiplets. The researchers say, "It's an excellent path toward the dissolution of the (relatively) big, complicated, and difficult-to-manufacture systems-on-chips that currently run everything from smartphones to supercomputers. In place of SoCs, system designers could use a conglomeration of smaller, simpler-to-design, and easier-to-manufacture chiplets tightly interconnected on an Si-IF."

The researchers linked up chiplets on a silicon-interconnect fabric built on a 100-millimeter-wide wafer. Unlike chips on a printed circuit board, they can be placed a mere 100 micrometers apart, speeding signals and reducing energy consumption. To evaluate the size benefits, the researchers compared an Internet of Things system based on an Arm microcontroller: using Si-IF not only shrinks the board by 70 percent but also reduces its weight from 20 grams to 8 grams.

Challenges associated with Silicon-Interconnect Fabric

Even though large progress has been made on Si-IF integration over the past few years, the researchers point out that much remains to be done. For instance, there is a need for a commercially viable, high-yield Si-IF manufacturing process. Mechanisms are also needed to test bare chiplets as well as unpopulated Si-IFs. New heat sinks or other thermal-dissipation strategies will be required to take advantage of silicon's good thermal conductivity. In addition, the chassis, mounts, connectors, and cabling for silicon wafers need to be engineered to enable complete systems. There is also the need to make several changes to design methodology and to consider system reliability.

People agreed that the research looked promising. However, some felt that replacing PCBs with Si-IF sounded overambitious, to begin with. A comment on Hacker News reads, "I agree this looks promising, though I'm not an expert in this field. But the title is a bit, well, overpromising or broad. I don't think we'll replace traditional motherboards anytime soon (except maybe in smartphones?). Rather, it will be an incremental progress."

Others were also not convinced. A Hacker News user pointed out several benefits of PCBs:

"PCBs are cheaper to manufacture than silicon wafers.
PCBs can be arbitrarily created and adjusted with little overhead cost (time and money).
PCBs can be re-worked if a small hardware fault(s) is found.
PCBs can carry large amount of power.
PCBs can help absorb heat away from some components.
PCBs have a small amount of flexibility, allowing them to absorb shock much easier.
PCBs can be cut in such a way as to allow for mounting holes or be in relatively arbitrary shapes.
PCBs can be designed to protect some components from static damage."

You can read the full research on IEEE.

Hot Chips 31: IBM Power10, AMD's AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more
IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation
Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
Samsung develops Key-value SSD prototype, paving the way for optimizing network storage efficiency and extending server CPU processing power
MIT researchers built a 16-bit RISC-V compliant microprocessor from carbon nanotubes
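A back-of-envelope sketch of the density claim in the researchers' quote above, using assumed wiring pitches; the 400 µm PCB and 10 µm Si-IF figures are illustrative, not UCLA's published process numbers.

```python
# Rough density comparison: connections per die edge at an assumed pitch.
# PCB escape routing is on the order of hundreds of micrometers per
# connection, while Si-IF wiring is close to on-chip scale.
pcb_pitch_um = 400      # assumed pitch per chip-to-chip connection on a PCB
si_if_pitch_um = 10     # assumed pitch on silicon-interconnect fabric
chip_edge_mm = 10       # one edge of a 10 mm x 10 mm die

def connections_per_edge(pitch_um: float, edge_mm: float) -> int:
    return int(edge_mm * 1000 / pitch_um)

print("PCB  :", connections_per_edge(pcb_pitch_um, chip_edge_mm), "connections/edge")
print("Si-IF:", connections_per_edge(si_if_pitch_um, chip_edge_mm), "connections/edge")
# ~25 vs ~1000: under these assumptions the same die edge supports
# roughly 40x more chip-to-chip links.
```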

Samsung develops Key-value SSD prototype, paving the way for optimizing network storage efficiency and extending server CPU processing power

Savia Lobo
06 Sep 2019
4 min read
Two days ago, Samsung announced a new prototype key-value Solid State Drive (SSD) that is compatible with the industry-standard API for key-value storage devices. The key-value SSD prototype moves the storage workload from server CPUs into the SSD without any supporting device. This will simplify software programming and make more effective use of storage resources in IT applications. The new prototype features extensive scalability, improved durability, improved software efficiency, improved system-level performance, and reduced write amplification (WAF).

Applications based on software KV stores need to handle garbage collection using a method called compaction. However, this affects system performance, as both the host CPU and the SSD work to clear away the garbage. "By moving these operations to the SSD in a straightforward, standardized manner, KV SSDs will represent a major upgrade in the way that storage is accessed in the future," the press release states. Garbage collection can be handled entirely in the SSD, freeing the CPU to handle the computational work.

Hangu Sohn, Vice President of NAND Product Planning, Samsung Electronics, said in a press release, "Our KV SSD prototype is leading the industry into a new realm of standardized next-generation SSDs, one that we anticipate will go a long way in optimizing the efficiency of network storage and extending the processing power of the server CPUs to which they're connected."

Also Read: Samsung speeds up on-device AI processing with a 4x lighter and 8x faster algorithm

Samsung's KV SSD prototype is based on a new open standard for a Key-Value Application Programming Interface (KV API) that was recently approved by the Storage Networking Industry Association (SNIA). Michael Oros, SNIA Executive Director, said, "The SNIA KV API specification, which provides an industry-wide interface between an application and a Key Value SSD, paves the way for widespread industry adoption of a standardized KV API protocol."

Hugo Patterson, Co-founder and Chief Scientist at Datrium, said, "SNIA's KV API is enabling a new generation of architectures for shared storage that is high-performance and scalable. Cloud object stores have shown the power of KV for scaling shared storage, but they fall short for data-intensive applications demanding low latency." "The KV API has the potential to get the server out of the way in becoming the standard-bearer for data-intensive applications, and Samsung's KV SSD is a groundbreaking step towards this future," Patterson added.

A user on Hacker News writes, "Would be interesting if this evolves into a full filesystem implementation in hardware (they talk about Object Drive but aren't focused on that yet). Some interesting future possibilities:
- A cross-platform filesystem that you could read/write from Windows, macOS, Linux, iOS, Android etc. Imagine having a single disk that could boot any computer operating system without having to manage partitions and boot records!
- Significantly improved filesystem performance as it's implemented in hardware.
- Better guarantees of write flushing (as SSD can include RAM + tiny battery) that translate into higher level filesystem objects. You could say, writeFile(key, data, flush_full, completion) and receive a callback when the file is on disk. All independent of the OS or kernel version you're running on.
- Native async support is a huge win
Already the performance is looking insane.
Would love to get away from the OS dictating filesystem choice and performance."

To know more about this news in detail, read the report on Samsung Key Value SSD.

Other interesting news in Hardware

Red Hat joins the RISC-V foundation as a Silver level member
AMD competes with Intel by launching EPYC Rome, world's first 7 nm chip for data centers, luring in Twitter and Google
Intel's 10th gen 10nm 'Ice Lake' processor offers AI apps, new graphics and best connectivity
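To make the programming model concrete, here is a hypothetical sketch of the put/get/exists/delete shape a key-value SSD exposes to applications. It is a dict-backed stand-in for illustration, not Samsung's or SNIA's actual binding.

```python
# Hypothetical sketch of the KV SSD programming model: the application
# stores variable-size values by key; the drive handles placement and
# garbage collection internally, with no file system or block layer.
class KeyValueDevice:
    def __init__(self):
        self._store = {}          # stand-in for the SSD's internal mapping

    def put(self, key: bytes, value: bytes) -> None:
        self._store[key] = value  # no host-side translation table needed

    def get(self, key: bytes) -> bytes:
        return self._store[key]

    def exists(self, key: bytes) -> bool:
        return key in self._store

    def delete(self, key: bytes) -> None:
        # On a real KV SSD, space reclamation (compaction/GC) happens
        # on-device, freeing the host CPU -- the press release's main claim.
        self._store.pop(key, None)

dev = KeyValueDevice()
dev.put(b"user:42", b'{"name": "ada"}')
print(dev.get(b"user:42"), dev.exists(b"user:7"))
```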


Espressif IoT devices susceptible to WiFi vulnerabilities can allow hijackers to crash devices connected to enterprise networks

Savia Lobo
05 Sep 2019
4 min read
Matheus Eduardo Garbelini, a member of the ASSET (Automated Systems SEcuriTy) Research Group at the Singapore University of Technology and Design, released a proof of concept for three WiFi vulnerabilities in the Espressif IoT devices ESP32 and ESP8266.

3 WiFi vulnerabilities on the ESP32/8266 IoT devices

Zero PMK Installation (CVE-2019-12587)

This WiFi vulnerability hijacks ESP32 and ESP8266 clients connected to enterprise networks. It allows an attacker to take control of the device's EAP session by sending an EAP-Fail message in the final step of the connection between the device and the access point. The researcher discovered that both IoT devices update their Pairwise Master Key (PMK) only when they receive an EAP-Success message. If an EAP-Fail message is received before the EAP-Success, the device skips updating the PMK received during a normal EAP exchange (EAP-PEAP, EAP-TTLS or EAP-TLS), yet still accepts the EAPoL 4-Way handshake. Each time the ESP32/ESP8266 starts, the PMK is initialized to zero; thus, if an EAP-Fail message is sent before the EAP-Success, the device uses a zero PMK, allowing the attacker to hijack the connection between the AP and the device.

ESP32/ESP8266 EAP client crash (CVE-2019-12586)

This WiFi vulnerability is found in the SDKs of the ESP32 and ESP8266 and allows an attacker in radio range to precisely trigger a crash in any ESP32/ESP8266 connected to an enterprise network. In combination with the zero PMK Installation vulnerability, it could increase the damage to any unpatched device. Espressif has fixed the problem and committed patches for the ESP32 SDK; however, the SDK and Arduino board support for the ESP8266 are still unpatched.

ESP8266 Beacon Frame Crash (CVE-2019-12588)

In this WiFi vulnerability, the client 802.11 MAC implementation in Espressif ESP8266 NONOS SDK 3.0 and earlier does not correctly validate the RSN AuthKey suite list count in beacon frames, probe responses, and association responses, which allows attackers in radio range to cause a denial of service (crash) via a crafted message. Two malformed-beacon-frame situations can trigger the problem:

- When crafted 802.11 frames are sent with the Auth Key Management Suite Count (AKM) field in the RSN tag too large or incorrect, the ESP8266 in station mode crashes.
- When crafted 802.11 frames are sent with the Pairwise Cipher Suite Count field in the RSN tag too large or incorrect, the ESP8266 in station mode crashes.

"The attacker sends a malformed beacon or probe response to an ESP8266 which is already connected to an access point. However, it was found that ESP8266 can crash even when there's no connection to an AP, that is even when ESP8266 is just scanning for the AP," the researcher says.

A user on Hacker News writes, "Due to cheap price ($2—$5 depending on the model) and very low barrier to entry technically, these devices are both very popular as well as very widespread in those two categories. These chips are the first hits for searches such as "Arduino wifi module", "breadboard wifi", "IoT wifi module", and many, many more as they're the downright easiest way to add wifi to something that doesn't have it out of the box.
I'm not sure how applicable these attack vectors are in the real world, but they affect a very large number of devices for sure."

To know more about this news in detail, read the proof of concept on GitHub.

Other interesting news in IoT security

Cisco Talos researchers disclose eight vulnerabilities in Google's Nest Cam IQ indoor camera
Microsoft reveals Russian hackers "Fancy Bear" are the culprit for IoT network breach in the U.S.
Researchers reveal vulnerability that can bypass payment limits in contactless Visa card
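The zero PMK flaw is easiest to see as a state-machine bug. The sketch below models just that logic: a client whose PMK starts at zero and is only replaced on EAP-Success keeps the known all-zero key when an attacker injects an EAP-Fail. It models the state machine only; it speaks no real 802.11 or EAP.

```python
# Simplified model of the CVE-2019-12587 logic flaw described above.
ZERO_PMK = bytes(32)

class VulnerableEapClient:
    def __init__(self):
        self.pmk = ZERO_PMK            # flaw: predictable all-zero initial key

    def on_eap_exchange(self, negotiated_pmk: bytes, final_message: str) -> None:
        if final_message == "EAP-Success":
            self.pmk = negotiated_pmk  # key is only installed on success
        # On "EAP-Fail" the freshly negotiated PMK is silently discarded,
        # yet the client still proceeds to the EAPoL 4-way handshake.

honest = VulnerableEapClient()
honest.on_eap_exchange(b"\x5a" * 32, "EAP-Success")

victim = VulnerableEapClient()
victim.on_eap_exchange(b"\x5a" * 32, "EAP-Fail")   # attacker injects this

print(honest.pmk == ZERO_PMK)  # False: real key installed
print(victim.pmk == ZERO_PMK)  # True: attacker knows the PMK and can hijack
```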


MIT researchers built a 16-bit RISC-V compliant microprocessor from carbon nanotubes

Amrata Joshi
30 Aug 2019
5 min read
On Wednesday, MIT researchers published a paper on building a modern microprocessor from carbon nanotube transistors, a greener alternative to their traditional silicon counterparts. The MIT researchers used carbon nanotubes to make a general-purpose, RISC-V-compliant microprocessor that handles 32-bit instructions and does 16-bit memory addressing.

Carbon nanotubes naturally come in semiconducting forms, exhibit useful electrical properties, and are extremely small. Carbon nanotube field-effect transistors (CNFETs) have properties that can give greater speeds and around 10 times the energy efficiency of silicon.

Co-author of the paper Max M. Shulaker, the Emanuel E Landsman Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) and a member of the Microsystems Technology Laboratories, says, "This is by far the most advanced chip made from any emerging nanotechnology that is promising for high-performance and energy-efficient computing." Shulaker further added, "There are limits to silicon. If we want to continue to have gains in computing, carbon nanotubes represent one of the most promising ways to overcome those limits. [The paper] completely re-invents how we build chips with carbon nanotubes."

Limitations in carbon nanotubes and how the researchers addressed them

According to the research paper, silicon has the additional advantage that it can be easily doped, but carbon nanotubes are so small that doping them is difficult. It is also difficult to grow the nanotubes where they're needed, and equally difficult to manipulate them or place them in the right location. Moreover, when carbon nanotubes are fabricated at scale, the transistors usually come with many defects that affect performance, making them impractical to use.

To overcome these issues, the MIT researchers invented new techniques to limit the defects and provide full functional control in fabricating CNFETs, using processes available in traditional silicon chip foundries. First, the researchers made a silicon surface with metallic features that were large enough to let several nanotubes bridge the gaps between the metal. Then they placed a layer of material on top of the nanotubes and used sonication to get rid of the aggregates; the material took the aggregates with it while leaving the underlying layer of nanotubes undisturbed. To confine nanotubes to where they were needed, the researchers etched off most of the nanotube layer, keeping nanotubes only where required, and added a variable layer of oxide on top.

The researchers also demonstrated a 16-bit microprocessor with more than 14,000 CNFETs that performs the same kinds of tasks as commercial microprocessors.

Introduced DREAM technique to attain 99.99% purity in carbon nanotubes

Advanced circuits need carbon nanotubes at around 99.999999 percent purity to be robust to failures, which is nearly impossible to achieve. The researchers introduced a technique called DREAM ("designing resiliency against metallic CNTs") that positions metallic CNFETs in a way that they don't disrupt computing. This relaxed the stringent purity requirement by around four orders of magnitude, or 10,000 times: they now require carbon nanotubes at only about 99.99 percent purity, which is possible to attain.
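Conceptually, DREAM turns a manufacturing problem into a design-tool constraint: score each logic-gate type by how badly stray metallic nanotubes degrade it, and let synthesis use only the tolerant gates. The sketch below illustrates that selection step with invented sensitivity scores and an assumed cutoff; the paper derives the real values from circuit analysis.

```python
# Conceptual sketch of DREAM's gate-selection step. All numbers are
# invented for illustration, not taken from the MIT paper.
gate_sensitivity = {          # higher = more disrupted by metallic CNTs
    "nand2": 0.12,
    "nor2": 0.95,             # hypothetical worst offender
    "inv": 0.10,
    "aoi21": 0.35,
    "xor2": 0.80,
}

THRESHOLD = 0.5               # assumed tolerance cutoff

# Build the standard-cell library only from gates below the threshold;
# synthesizing with these lets the chip tolerate ~99.99%-pure nanotubes
# instead of requiring a near-impossible 99.999999%.
allowed_library = sorted(g for g, s in gate_sensitivity.items() if s < THRESHOLD)
print("gates the synthesis tool may use:", allowed_library)
```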
Developed RINSE for cleansing contamination on the chip

For CNFET fabrication, the carbon nanotubes are deposited in a solution onto a wafer with predesigned transistor architectures. In this process, carbon nanotubes stick randomly together to form big bundles that contaminate the chip. To cleanse this contamination, the researchers developed RINSE ("removal of incubated nanotubes through selective exfoliation"). In this process, the wafer is pretreated with an agent that promotes carbon nanotube adhesion, then coated with a polymer and dipped in a special solvent. The solvent washes away the polymer, which carries away the big bundles, while single carbon nanotubes remain stuck to the wafer. The RINSE technique can lead to about a 250-fold reduction in particle density on the chip compared to similar methods.

New chip design RV16X-NANO handles 32-bit instructions on the RISC-V architecture

The researchers built a new chip design and drew on insights from earlier chips. According to those insights, some logical functions are less sensitive to metallic nanotubes than others. The researchers modified an open-source RISC design tool to take this information into account, resulting in a chip design containing none of the gates most sensitive to metallic carbon nanotubes. The team named the chip RV16X-NANO, designed to handle the 32-bit-long instructions of the RISC-V architecture. They used more than 14,000 individual transistors for the RV16X-NANO, and every single one of those gates worked as planned. The chip successfully executed a variant of the traditional "Hello World" program, which is used as an introduction to the syntax of different programming languages.

In the paper, the researchers also discuss ways to improve their existing design. The design needs to tolerate metallic nanotubes, as it will have multiple nanotubes in each transistor, and it must ensure that a few nanotubes in bad orientations don't leave too little space for others to form functional contacts. The researchers' major goal was to make single-nanotube transistors, which would require the ability to control exactly where nanotubes are placed on the chip. This research proves that it is possible to integrate carbon nanotubes into existing chipmaking processes, along with the additional electronics necessary for a processor to function. The researchers have started moving their manufacturing techniques into a silicon chip foundry via a DARPA (Defense Advanced Research Projects Agency) program. To know more about this research, check out the official paper.

What's new in IoT this week?

Intel's 10th gen 10nm 'Ice Lake' processor offers AI apps, new graphics and best connectivity
Hot Chips 31: IBM Power10, AMD's AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more
Alibaba's chipmaker launches open source RISC-V based 'XuanTie 910 processor' for 5G, AI, IoT and self-driving applications


IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation

Vincy Davis
22 Aug 2019
3 min read
Yesterday, IBM made a major announcement to cement its commitment to the open hardware movement. At the ongoing Linux Foundation Open Source Summit 2019, Ken King, the general manager for OpenPOWER at IBM, disclosed that the Power series chipmaker is open-sourcing its Power Instruction Set Architecture (ISA) and other chips for developers to build new hardware. IBM wants the open community to take advantage of "POWER's enterprise-leading capabilities to process data-intensive workloads and create new software applications for AI and hybrid cloud built to take advantage of the hardware's unique capabilities," says IBM.

At the Summit, King also announced that the OpenPOWER Foundation will be integrated into the Linux Foundation. Launched in 2013, IBM's OpenPOWER Foundation is a collaboration around Power ISA-based products and has the support of 350 members, including IBM, Google, Hitachi, and Red Hat. By moving the OpenPOWER Foundation under the Linux Foundation, IBM wants the developer community to try Power-based systems without paying any fee. This should motivate developers to customize their OpenPOWER chips for applications like AI and hybrid cloud by taking advantage of POWER's rich feature set.

"With our recent Red Hat acquisition and today's news, POWER is now the only architecture—and IBM the only processor vendor—that can boast of a completely open systems stack, from the foundation of the processor instruction set and firmware all the way through the software," King adds.

Read More: Red Hat joins the RISC-V foundation as a Silver level member

The Linux Foundation supports open source projects by providing financial and intellectual resources, infrastructure, services, events, and training. Hugh Blemings, the Executive Director of the OpenPOWER Foundation, said in a blog post, "The OpenPOWER Foundation will now join projects and organizations like OpenBMC, CHIPS Alliance, OpenHPC and so many others within the Linux Foundation." He concludes, "The Linux Foundation is the premier open-source group, and we're excited to be working more closely with them."

Many developers are of the opinion that IBM open-sourcing the ISA is a decision taken too late. A user on Hacker News comments, "28 years after introduction. A bit late." Another user says, "I'm afraid they are doing it for at least 10 years too late." Another comment reads, "might be too little too late. I used to be powerpc developer myself, now nearly all the communities, the ecosystem, the core developers are gone, it's beyond repair, sigh."

Many users also think IBM's announcements are a direct challenge to the RISC-V community. A Redditor comments, "I think the most interesting thing about this is that now RISC-V has a direct competitor, and I wonder how they'll react to IBM's change." Another user says, "Symbolic. Risc-V, is more open, and has a lot of implementations already, many of them open. Sure, power is more about high performance computing, but it doesn't change that much. Still, nice addition. It doesn't really change substantially anything about Power or it's future adoption."

You can visit the IBM newsroom for more information on the announcements.

Black Hat USA 2019 conference highlights: IBM's 'warshipping', OS threat intelligence bots, Apple's $1M bug bounty programs and much more!
IBM continues to layoff older employees solely to attract Millennials to be at par with Amazon and Google
IBM halts sales of Watson AI tool for drug discovery amid tepid growth: STAT report

Introducing kdevops, a modern DevOps framework for Linux kernel development

Fatema Patrawala
20 Aug 2019
3 min read
Last Friday, Luis Chamberlain announced the release of kdevops, a DevOps framework for Linux kernel development. Chamberlain wrote in his email, "the goal behind this project is to provide a modern devops framework for Linux kernel development. It is not a test suite, it is designed to use any test suites, and more importantly, it allows us to let us easily set up test environments in a jiffie. It supports different virtualization environments, and different cloud environments, and supports different Operating Systems."

kdevops is a sample framework that lets you easily set up a testing environment for a number of different use cases.

How does kdevops work?

kdevops relies on Vagrant, Terraform and Ansible to get you going with your virtualization/bare-metal/cloud provisioning environment. It relies heavily on public Ansible Galaxy roles and Terraform modules. This lets the kdevops team share code with the community and allows them to use the project as a demo framework that exercises these ansible roles and terraform modules.

There are three parts to the long-term ideals for kdevops:

1. Provisioning the required virtual hosts/cloud environment
2. Provisioning your requirements
3. Running whatever you want

Ansible is used to fetch all the required ansible roles. Then Vagrant or Terraform can be used to provision hosts. Vagrant makes use of two ansible roles: one to update ~/.ssh/config, and one to update the systems with basic development preference files, things like .git config or bashrc hacks; this last part is handled by the devconfig ansible role. Since ~/.ssh/config is updated, you can then run further ansible roles manually when using Vagrant. If using Terraform for cloud environments, it updates ~/.ssh/config directly without ansible; however, since access to hosts on cloud environments can vary in time, running all ansible roles is expected to be done manually.

What you can do with kdevops

- Full vagrant provisioning, including updating your ~/.ssh/config
- Terraform provisioning on different cloud providers
- Running ansible to install dependencies on Debian
- Using ansible to clone, compile and boot into any random kernel git tree with a supplied config
- Updating ~/.ssh/config for terraform, first tested with the OpenStack provider, with both generic and special minicloud support; other terraform providers just require making use of the newly published terraform module add-host-ssh-config

On Hacker News, this release has gained positive reviews, but the one concern for users is whether it has anything to do with DevOps, as it appears to be automated test-environment provisioning. One of them comments, "This looks cool, but I'm not sure what it has to do with devops? It just seems to be automated test environment provisioning, am I missing something?"

On Reddit as well, Linux users are happy with this setup and find it really promising; one of the comments reads, "I have so much hacky scriptwork around kvm, have always been looking for a cleaner setup; this looks super promising. thank you."

To know more about this release, check out the official announcement page as well as the GitHub page.

Why do IT teams need to transition from DevOps to DevSecOps?
Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Azure DevOps report: How a bug caused 'sqlite3 for Python' to go missing from Linux images


Red Hat joins the RISC-V foundation as a Silver level member

Vincy Davis
12 Aug 2019
2 min read
Last week, RISC-V announced that Red Hat is the latest major company to join the RISC-V foundation. Red Hat has joined as a Silver level member, which carries dues of US$5,000 per year and includes five discounted registrations for RISC-V workshops. RISC-V states in the official blog post that “As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.”

RISC-V is a free and open-source hardware instruction set architecture (ISA) which aims to enable extensible software and hardware freedom in computing design and innovation. As a member of the RISC-V foundation, Red Hat now officially agrees to support the use of RISC-V chips; since no major RISC-V hardware has yet reached performance parity with incumbents, member companies will continue using both Arm and RISC-V chips for now.

Read More: RISC-V Foundation officially validates RISC-V base ISA and privileged architecture specifications

In January, Raspberry Pi also joined the RISC-V foundation, though it has not announced whether it will release a RISC-V developer board in place of its Arm-based boards. IBM has been a RISC-V foundation member for many years, and in October last year it acquired Red Hat, the major distributor of open-source software and technology, for $34 billion, with an aim to deliver a next-generation hybrid multi-cloud platform. It is natural, then, that IBM would want Red Hat to join the RISC-V foundation as well. Other tech giants such as Google, Qualcomm, Samsung, and Alibaba are also part of the RISC-V foundation.

Related reads:
- Alibaba’s chipmaker launches open source RISC-V based ‘XuanTie 910 processor’ for 5G, AI, IoT and self-driving applications
- Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap
- AdaCore joins the RISC-V Foundation, adds support for C and Ada compilation


AMD competes with Intel by launching EPYC Rome, world’s first 7 nm chip for data centers, luring in Twitter and Google

Bhagyashree R
09 Aug 2019
5 min read
On Wednesday, Advanced Micro Devices (AMD) unveiled its highly anticipated second-generation EPYC processor for data centers, code-named “Rome”. Since the launch, the company has announced agreements with many tech giants, including Intel’s biggest customers, Twitter and Google.

Lisa Su, AMD’s president and CEO, said during her keynote at the launch event, “Today, we set a new standard for the modern data center. Adoption of our new leadership server processors is accelerating with multiple new enterprise, cloud and HPC customers choosing EPYC processors to meet their most demanding server computing needs.”

EPYC Rome: The world’s first 7nm server chip

AMD first showcased the EPYC Rome chip, the world’s first 7nm server processor, at its Next Horizon 2018 event. Based on the Zen 2 microarchitecture, it features up to eight 7nm chiplets around a 14nm I/O die in the center, interconnected by the Infinity fabric. The chip aims to offer twice the performance per socket and about 4X the floating-point performance of the previous generation of EPYC chips.

https://www.youtube.com/watch?v=kC3ny3LBfi4

At the launch, a performance comparison based on the SPECrate 2017 int-peak benchmark showed the top-of-the-line 64-core AMD EPYC 7742 delivering double the performance of the top-of-the-line 28-core Intel Xeon Platinum 8280M. Priced at under $7,000, it is also a lot more affordable than Intel’s chip, priced at $13,000.

AMD competes with Intel, the dominant supplier of data center chips

AMD’s main competitor in the data center chip realm is Intel, the dominant supplier with more than 90% of the market share. AMD was able to capture a small share with the release of its first-generation EPYC server chips, and coming up with a second-generation chip that is performant yet affordable gives it an edge. Donovan Norfolk, executive director of Lenovo’s data center group, told DataCenter Knowledge, “Intel had a significant portion of the market for a long time. I think they’ll continue to have a significant portion of it. I do think that there are more customers that will look at AMD than have in the past.”

The delay in the launch of Intel’s 10nm chips may also have worked in AMD’s favor: after a long wait, they officially launched only earlier this month, and Intel’s 7nm chips are not expected until 2021.

The EPYC Rome chip has already grabbed the attention of many tech giants. Google is planning to use the EPYC server chip in its internal data centers and also wants to offer it to external developers as part of its cloud computing offerings. Twitter will start using EPYC servers in its data centers later this year. Hewlett Packard Enterprise is already using these chips in three ProLiant servers and plans to have 12 such systems by the end of this year. Dell also plans to add second-gen EPYC servers to its portfolio this fall. Following AMD’s customer announcements, Intel shares were down 0.6% to $46.42 in after-hours trading. Though AMD’s chips beat Intel’s in some computing tasks, they do lag in a few desirable and advanced features.
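A quick back-of-the-envelope calculation, using only the launch figures quoted above (roughly twice the benchmark score at under $7,000 versus $13,000), shows how large the implied price-performance gap is:

```python
# Back-of-the-envelope price-performance comparison from the quoted
# launch figures: EPYC 7742 showed ~2x the SPECrate 2017 int-peak
# score of the Xeon Platinum 8280M at roughly half the price.
epyc_price = 7_000       # USD, "under $7,000"
xeon_price = 13_000      # USD
relative_perf = 2.0      # EPYC vs. Xeon, per the quoted benchmark

advantage = relative_perf * (xeon_price / epyc_price)
print(f"Implied perf-per-dollar advantage: ~{advantage:.1f}x")  # ~3.7x
```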
Patrick Moorhead, founder of Moor Insights & Strategy, told Reuters, “Intel chip features for machine learning tasks and new Intel memory technology being with customers such as German software firm SAP SE (SAPG.DE) could give Intel an advantage in those areas.”

The news sparked a discussion on Hacker News. One user said, “This is a big win for AMD and for me it reconfirms that their strategy of pushing into the mainstream features that Intel is trying to hold hostage for the "high end" is a good one. Back when AMD first introduced the 64-bit extensions to the x86 architecture and directly challenged Intel who was selling 64 bits as a "high end" feature in their Itanium line, it was a place where Intel was unwilling to go (commoditizing 64-bit processors). That proved pretty successful for them. Now they have done it again by commoditizing "high core count" processors. Each time they do this I wonder if Intel will ever learn that you can't "get away" with selling something for a lot of money that can be made more cheaply forever.”

Another user commented, “I hope AMD turns their attention to machine learning tasks soon not just against Intel but NVIDIA also. The new Titan RTX GPUs with their extra memory and Nvlink allow for some really awesome tricks to speed up training dramatically but they nerfed it by only selling without a blower-style fan making it useless for multi-GPU setups. So the only option is to get Titan RTX rebranded as a Quadro RTX 6000 with a blower-style fan for $2,000 markup. $2000 for a fan. The only way to stop things like this will be competition in the space.”

To know more in detail, you can watch EPYC Rome’s launch event:

https://www.youtube.com/watch?v=9Jn9NREaSvc

Related reads:
- Intel’s 10th gen 10nm ‘Ice Lake’ processor offers AI apps, new graphics and best connectivity
- Why Intel is betting on BFLOAT16 to be a game changer for deep learning training? Hint: Range trumps Precision.
- Intel’s new brain inspired neuromorphic AI chip contains 8 million neurons, processes data 1K times faster

Intel’s 10th gen 10nm ‘Ice Lake’ processor offers AI apps, new graphics and best connectivity

Vincy Davis
02 Aug 2019
4 min read
After a long wait, Intel has officially launched its first 10th-generation Core processors, code-named ‘Ice Lake’. The first batch contains 11 highly integrated 10nm processors that showcase high-performance artificial intelligence (AI) features and are designed for sleek 2-in-1s and laptops.

The Ice Lake processors are manufactured on Intel’s 10nm process and pair the CPU with a 14nm chipset in the same carrier. Each includes two or four Sunny Cove cores along with Intel’s Gen11 graphics processing unit (GPU). The 10nm figure indicates the size of the transistors used, which also hints at power draw: the smaller the transistor, the better its power consumption is generally considered to be.

Read More: Intel unveils the first 3D Logic Chip packaging technology, ‘Foveros’, powering its new 10nm chips, ‘Sunny Cove’

Chris Walker, Intel corporate vice president and general manager of Mobility Client Platforms in the Client Computing Group, says that “With broad-scale AI for the first time on PCs, an all-new graphics architecture, best-in-class Wi-Fi 6 (Gig+) and Thunderbolt 3 – all integrated onto the SoC, thanks to Intel’s 10nm process technology and architecture design – we’re opening the door to an entirely new range of experiences and innovations for the laptop.”

Intel was supposed to ship 10nm processors way back in 2016; CEO Bob Swan says the delay was due to the “company’s overly aggressive strategy for moving to its next node.” Intel has also introduced a new processor numbering structure for the 10th-generation Ice Lake parts, which indicates both the generation and the level of graphics performance of each processor.

[Image source: Intel]

What’s new in the 10th generation Intel Core processors?

Intelligent performance

The 10th-generation Core processors are the first purpose-built processors for AI on laptops and 2-in-1s. They are built for modern AI-infused applications and contain features such as:

- Intel Deep Learning Boost, a dedicated instruction set that accelerates neural networks on the CPU, used specifically to add the flexibility to run complex AI workloads with maximum responsiveness.
- Up to 1 teraflop of GPU engine compute for sustained, high-throughput inference applications.
- Intel’s Gaussian & Neural Accelerator (GNA), an exclusive engine for background workloads such as voice processing and noise suppression at ultra-low power, for the best possible battery life.

New graphics

With Iris Plus graphics, the 10th-generation Core processors deliver double the graphics performance at 1080p and higher-level content creation in 4K video editing, application of video filters, and high-resolution photo processing. This is the first time Intel’s GPU supports VESA’s Adaptive Sync* display standard, enabling a smoother gaming experience across games like Dirt Rally 2.0* and Fortnite*. According to Intel, this is also the industry’s first integrated GPU to incorporate variable rate shading for better rendering performance, courtesy of the Gen11 graphics architecture. The processors support the BT.2020* specification as well, making it possible to view 4K HDR video in a billion colors.

Best connectivity

With improved board integration, PC manufacturers can innovate on form factor for sleeker designs with Wi-Fi 6 (Gig+) connectivity and up to four Thunderbolt 3 ports.
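As an illustrative aside not taken from Intel’s announcement: Deep Learning Boost is exposed to software as the AVX-512 VNNI instruction extension, and on Linux a quick way to check whether a CPU advertises it is to scan /proc/cpuinfo, as in this minimal sketch.

```python
# Minimal sketch: check whether the CPU advertises the AVX-512 VNNI
# flag (the instruction extension behind Intel Deep Learning Boost).
# Linux-only, since it reads /proc/cpuinfo.
def has_dl_boost() -> bool:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return "avx512_vnni" in line.split()
    return False

if __name__ == "__main__":
    print("AVX-512 VNNI (DL Boost):",
          "present" if has_dl_boost() else "absent")
```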
Intel claims this is the “fastest and most versatile USB-C connector available.”

The first batch of 11 Ice Lake processors comprises six Ice Lake U-series and five Ice Lake Y-series parts; the complete list is given below.

[Image Source: Intel]

Intel has revealed that laptops with the 10th-generation Core processors can be expected this holiday season. The post also states that additional products in the 10th-generation Intel Core mobile processor family will follow, owing to increased computing needs; the upcoming processors will “deliver increased productivity and performance scaling for demanding, multithreaded workloads.”

Users love the new 10th-generation Core processor features and are especially excited about the Gen11 graphics.

https://twitter.com/Tribesigns/status/1133284822548279296
https://twitter.com/Isaacraft123/status/1156982456408596481

Many users are also expecting to see the new processors in the upcoming Mac notebooks.

https://twitter.com/ChernSchwinn1/status/1157297037336928256
https://twitter.com/matthewmspace/status/1157295582844575744

Head over to the Intel newsroom page for more details.

Related reads:
- Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ
- Why Intel is betting on BFLOAT16 to be a game changer for deep learning training? Hint: Range trumps Precision.
- Intel’s new brain inspired neuromorphic AI chip contains 8 million neurons, processes data 1K times faster


Mozilla releases WebThings Gateway 0.9 experimental builds targeting Turris Omnia and Raspberry Pi 4

Bhagyashree R
29 Jul 2019
4 min read
In April, the Mozilla IoT team relaunched Project Things as “WebThings”, with its two components: WebThings Gateway and WebThings Framework. WebThings is an open-source implementation of W3C’s Web of Things standard for monitoring and controlling connected devices over the web. On Friday, the team announced the release of WebThings Gateway 0.9 and the availability of its first experimental builds for the Turris Omnia. This release is also the first version of WebThings Gateway to support the new Raspberry Pi 4. Alongside it, the team released WebThings Framework 0.12.

W3C’s Web of Things standard

The Internet of Things (IoT) has a lot of potential, but it suffers from a lack of interoperability across platforms. The Web of Things aims to solve this by building a decentralized IoT that uses the web as its application layer. It provides mechanisms to formally describe IoT interfaces so that IoT devices and services can interact with each other, independent of their underlying implementation. To connect real-world things to the web, each thing is assigned a URI that makes it linkable and discoverable. The standard is currently going through the W3C standardization process.

Updates in WebThings Gateway 0.9 and WebThings Framework 0.12

WebThings Gateway is a software distribution for smart home gateways that allows users to monitor and control their smart home devices over the web, without a middleman. Among the protocols it supports are HomeKit, ZigBee, Thread, MQTT, Weave and AMQP; supported languages include JavaScript (Node.js), Python, Rust, Java and C++.

The experimental builds of WebThings Gateway 0.9 are based on OpenWrt, a Linux operating system for embedded devices. They come with a new first-time setup for configuring the gateway as a router and Wi-Fi access point in its own right, rather than connecting to an existing Wi-Fi network.

[Source: Mozilla]

However, Mozilla notes that the router configuration is still pretty basic and not yet ready to replace your existing wireless router. “This is just our first step along the path to creating a full software distribution for wireless routers,” reads the announcement. We can expect support for more wireless routers and router developer boards in the near future.

This version ships with a new type of add-on called notifier add-ons. In previous gateway versions, push notifications were the only way of notifying users of an event, but that mechanism is not supported by all browsers, nor is it always the most convenient way to reach users. As a solution, Mozilla came up with notifier add-ons, which let you create a set of outlets; each outlet acts as an output for a defined rule. For instance, you can set up a rule to get an SMS or an email whenever motion is detected in your home. You can also configure a notification with a title, a message, and a priority level.

[Source: Mozilla]

WebThings Gateway 0.9 and WebThings Framework 0.12 also bring a few changes to Thing Descriptions, aligning them more closely with the latest W3C drafts. A Thing Description provides a vocabulary for describing physical devices connected to the web in a machine-readable format with a default JSON encoding. The “name” member is now changed to “title”, and the gateway exposes some experimental new Thing Description properties. To know more, check out Mozilla’s official announcement. To get started, head over to its GitHub repository.
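To illustrate the renamed member, here is a hypothetical minimal Thing Description in the default JSON encoding, sketched in Python. The device, its URI, and its single property are invented for the example; note the “title” member where earlier drafts used “name”.

```python
import json

# A hypothetical, minimal Thing Description in the default JSON
# encoding. The "title" member replaces "name" from earlier drafts,
# per the 0.9/0.12 alignment with the latest W3C drafts.
lamp = {
    "@context": "https://iot.mozilla.org/schemas",
    "id": "https://gateway.example/things/lamp-1",  # URI making the thing linkable
    "title": "Living Room Lamp",                    # was "name" in older versions
    "@type": ["OnOffSwitch", "Light"],
    "properties": {
        "on": {
            "type": "boolean",
            "title": "On/Off",
            "links": [{"href": "/things/lamp-1/properties/on"}],
        }
    },
}

print(json.dumps(lamp, indent=2))
```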
Related reads:
- Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
- Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs
- Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices