
Tech News - Embedded Systems

22 Articles
Researchers reveal Light Commands: laser-based audio injection attacks on voice-control devices like Alexa, Siri and Google Assistant

Fatema Patrawala
06 Nov 2019
5 min read
Researchers from the University of Electro-Communications in Tokyo and the University of Michigan released a paper on Monday that raises alarming questions about the security of voice-control devices. In the paper, the researchers present ways in which they were able to manipulate Siri, Alexa, and other devices using “Light Commands”, a vulnerability in MEMS (micro-electro-mechanical systems) microphones.

Light Commands was discovered in May this year. It allows attackers to remotely inject inaudible and invisible commands into voice assistants such as Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri using light. The vulnerability can only become more dangerous as voice-control devices gain popularity.

How Light Commands work

Consumers use voice-control devices for many applications, for example to unlock doors or make online purchases with simple voice commands. The research team tested a handful of such devices and found that Light Commands work on any smart speaker or phone that uses MEMS microphones. These systems contain tiny components that convert audio signals into electrical signals. By shining a laser through a window at the microphones inside smart speakers, tablets, or phones, a faraway attacker can remotely send inaudible and potentially invisible commands, which are then acted upon by Alexa, Portal, Google Assistant, or Siri.

Many users do not enable voice authentication or passwords to protect their devices from unauthorized use. Hence, an attacker can use light-injected voice commands to unlock a victim's smart-lock-protected front door, or even locate, unlock, and start various vehicles. The researchers also showed that Light Commands can be executed at long range: they demonstrated the attack in a 110-meter hallway, the longest hallway available to them during the research.

[Image: Experimental setup for exploring attack range in the 110 m long corridor. Source: Light Commands research paper]

The Light Commands attack can be executed using a simple laser pointer, a laser driver, and a sound amplifier. A telephoto lens can be used to focus the laser for long-range attacks.

Detecting Light Commands attacks

The researchers also describe how one can detect whether a device is being attacked by Light Commands. Although command injection via light makes no sound, an attentive user can notice the attacker's light beam reflected on the target device. Alternatively, one can monitor the device's verbal response and light-pattern changes, both of which serve as command confirmation. The researchers add that, so far, they have not seen any case in which the Light Commands attack has been maliciously exploited.

Limitations in executing the attack

Light Commands do have some limitations in execution:
- Lasers must point directly at a specific component within the microphone to transmit audio information.
- Attackers need a direct line of sight and a clear path for the laser to travel.
- Most light signals are visible to the naked eye and would expose the attacker. Also, voice-control devices respond out loud when activated, which could alert nearby people of foul play.
- Controlling advanced lasers with precision requires a certain degree of experience and equipment, so there is a high barrier to entry for long-range attacks.

How to mitigate such attacks

In the paper, the researchers suggest adding an extra layer of authentication to voice assistants to mitigate the attack. They also suggest that manufacturers use sensor-fusion techniques, such as acquiring audio from multiple microphones. When the attacker uses a single laser, only a single microphone receives a signal while the others receive nothing; manufacturers can attempt to detect such anomalies and ignore the injected commands.

Another proposed approach is reducing the amount of light reaching the microphone's diaphragm, either with a barrier that physically blocks straight light beams, eliminating the line of sight to the diaphragm, or with a non-transparent cover on top of the microphone hole. However, the researchers concede that such physical barriers are only effective up to a point, as an attacker can always increase the laser power to pass through the barrier or create a new light path.

Users discuss the photoacoustic effect at play

On Hacker News, the research has gained much attention; users find it interesting and applaud the researchers for the demonstration. Some discuss the prices and features of laser pointers and laser drivers available for attacking voice assistants. Others discuss the physics behind the technique; one user says, “I think the photoacoustic effect is at play here. Discovered by Alexander Graham Bell has a variety of applications. It can be used to detect trace gases in gas mixtures at the parts-per-trillion level among other things. An optical beam chopped at an audio frequency goes through a gas cell. If it is absorbed, there's a pressure wave at the chopping frequency proportional to the absorption. If not, there isn't. Synchronous detection (e.g. lock in amplifiers) knock out any signal not at the chopping frequency. You can see even tiny signals when there is no background. Hearing aid microphones make excellent and inexpensive detectors so I think that the mics in modern phones would be comparable. Contrast this with standard methods where one passes a light beam through a cell into a detector, looking for a small change in a large signal. https://chem.libretexts.org/Bookshelves/Physical_and_Theoret... Hats off to the Michigan team for this very clever (and unnerving) demonstration.”
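The multi-microphone mitigation the researchers propose can be illustrated with a small sketch. This is not the researchers' implementation, just a toy model of the idea: a real voice in the room excites every microphone in an array, whereas a single laser excites only the diaphragm it is focused on, so a command whose energy sits almost entirely on one channel is suspicious.

```python
import numpy as np

def looks_like_laser_injection(channels, dominance_threshold=0.9):
    """Flag a command when nearly all signal energy is on one microphone.

    channels: 2-D array, one row per microphone, one column per sample.
    A genuine acoustic source reaches every microphone; a single laser
    only reaches the one it is pointed at.
    """
    energies = np.sum(np.asarray(channels, dtype=float) ** 2, axis=1)
    total = energies.sum()
    if total == 0:
        return False  # silence: nothing to flag
    return energies.max() / total > dominance_threshold

rng = np.random.default_rng(0)
voice = rng.normal(size=(4, 1000))        # sound reaches all four mics
laser = np.zeros((4, 1000))
laser[2] = rng.normal(size=1000)          # laser hits only mic 2

print(looks_like_laser_injection(voice))  # energy spread out -> not flagged
print(looks_like_laser_injection(laser))  # one mic dominates -> flagged
```

A production implementation would of course have to cope with reverberation, directional sources, and partially occluded microphones, which is why the paper describes this only as one ingredient of a sensor-fusion defense.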

Silicon-Interconnect Fabric is soon on its way to replace Printed Circuit Boards, new UCLA research claims

Sugandha Lahoti
26 Sep 2019
4 min read
Researchers from UCLA claim in a new study that printed circuit boards could be replaced with what they call silicon-interconnect fabric, or Si-IF. The fabric allows bare chips to be connected directly to wiring on a separate piece of silicon. The researchers are Puneet Gupta and Subramanian Iyer, members of the electrical engineering department at the University of California, Los Angeles.

How can Silicon-Interconnect Fabric be useful

In a report published on IEEE Spectrum on Tuesday, the researchers suggest that replacing printed circuit boards with silicon would especially help in building smaller, lighter-weight systems for wearables and other size-constrained gadgets. They write, “Unlike connections on a printed circuit board, the wiring between chips on our fabric is just as small as wiring within a chip. Many more chip-to-chip connections are thus possible, and those connections are able to transmit data faster while using less energy.” Si-IF can also be useful for building “powerful high-performance computers that would pack dozens of servers’ worth of computing capability onto a dinner-plate-size wafer of silicon.”

Silicon-interconnect fabric could also dissolve the system-on-chip (SoC) into integrated collections of dielets, or chiplets. The researchers say, “It’s an excellent path toward the dissolution of the (relatively) big, complicated, and difficult-to-manufacture systems-on-chips that currently run everything from smartphones to supercomputers. In place of SoCs, system designers could use a conglomeration of smaller, simpler-to-design, and easier-to-manufacture chiplets tightly interconnected on an Si-IF.”

The researchers linked up chiplets on a silicon-interconnect fabric built on a 100-millimeter-wide wafer. Unlike chips on a printed circuit board, the chiplets can be placed a mere 100 micrometers apart, speeding signals and reducing energy consumption. To evaluate the size savings, the researchers compared an Internet of Things system based on an Arm microcontroller: using Si-IF not only shrinks the area of the board by 70 percent but also reduces its weight from 20 grams to 8 grams.

Challenges associated with Silicon-Interconnect Fabric

Even though large progress has been made on Si-IF integration over the past few years, the researchers point out that much remains to be done. For instance, a commercially viable, high-yield Si-IF manufacturing process is needed, as are mechanisms to test bare chiplets and unpopulated Si-IFs. New heat sinks or other thermal-dissipation strategies will be required to take advantage of silicon’s good thermal conductivity. In addition, the chassis, mounts, connectors, and cabling for silicon wafers need to be engineered to enable complete systems, several changes must be made to design methodology, and system reliability needs consideration.

People agreed that the research looked promising. However, some felt that replacing PCBs with Si-IF sounded overambitious, to begin with. A comment on Hacker News reads, “I agree this looks promising, though I'm not an expert in this field. But the title is a bit, well, overpromising or broad. I don't think we'll replace traditional motherboards anytime soon (except maybe in smartphones?). Rather, it will be an incremental progress.”

Others were also not convinced. A Hacker News user pointed out several benefits of PCBs: “PCBs are cheaper to manufacture than silicon wafers. PCBs can be arbitrarily created and adjusted with little overhead cost (time and money). PCBs can be re-worked if a small hardware fault(s) is found. PCBs can carry large amount of power. PCBs can help absorb heat away from some components. PCBs have a small amount of flexibility, allowing them to absorb shock much easier. PCBs can be cut in such a way as to allow for mounting holes or be in relatively arbitrary shapes. PCBs can be designed to protect some components from static damage.”

You can read the full research on IEEE Spectrum.
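As a quick sanity check on the reported savings (the 70 percent area figure and the 20-gram and 8-gram weights come from the article; the percentage arithmetic is ours):

```python
# Reported figures for the Arm-based IoT demonstrator.
pcb_weight_g = 20.0    # weight on a printed circuit board
si_if_weight_g = 8.0   # weight on silicon-interconnect fabric

weight_reduction = (pcb_weight_g - si_if_weight_g) / pcb_weight_g
print(f"weight reduced by {weight_reduction:.0%}")  # 60%, alongside the 70% smaller board area
```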

IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation

Vincy Davis
22 Aug 2019
3 min read
Yesterday, IBM made a major announcement to cement its commitment to the open hardware movement. At the ongoing Linux Foundation Open Source Summit 2019, Ken King, general manager for OpenPOWER at IBM, disclosed that the Power-series chipmaker is open-sourcing its Power Instruction Set Architecture (ISA) and other chip designs for developers to build new hardware. IBM wants open-community members to take advantage of “POWER's enterprise-leading capabilities to process data-intensive workloads and create new software applications for AI and hybrid cloud built to take advantage of the hardware's unique capabilities,” says IBM.

At the Summit, King also announced that the OpenPOWER Foundation will be integrated with the Linux Foundation. Launched in 2013, IBM’s OpenPOWER Foundation is a collaboration around Power ISA-based products and has the support of 350 members, including IBM, Google, Hitachi, and Red Hat. By moving the OpenPOWER Foundation under the Linux Foundation, IBM lets the developer community try Power-based systems without paying any fee, and hopes to motivate developers to customize OpenPOWER chips for applications like AI and hybrid cloud by taking advantage of POWER’s rich feature set. “With our recent Red Hat acquisition and today’s news, POWER is now the only architecture—and IBM the only processor vendor—that can boast of a completely open systems stack, from the foundation of the processor instruction set and firmware all the way through the software,” King adds.

Read More: Red Hat joins the RISC-V foundation as a Silver level member

The Linux Foundation supports open source projects by providing financial and intellectual resources, infrastructure, services, events, and training. Hugh Blemings, Executive Director of the OpenPOWER Foundation, said in a blog post, “The OpenPOWER Foundation will now join projects and organizations like OpenBMC, CHIPS Alliance, OpenHPC and so many others within the Linux Foundation.” He concludes, “The Linux Foundation is the premier open-source group, and we’re excited to be working more closely with them.”

Many developers are of the opinion that IBM's open-sourcing of the ISA comes too late. A user on Hacker News comments, “28 years after introduction. A bit late.” Another user says, “I'm afraid they are doing it for at least 10 years too late.” Another comment reads, “might be too little too late. I used to be powerpc developer myself, now nearly all the communities, the ecosystem, the core developers are gone, it's beyond repair, sigh”

Many users also see IBM’s announcements as a direct challenge to the RISC-V community. A Redditor comments, “I think the most interesting thing about this is that now RISC-V has a direct competitor, and I wonder how they'll react to IBM's change.” Another user says, “Symbolic. Risc-V, is more open, and has a lot of implementations already, many of them open. Sure, power is more about high performance computing, but it doesn't change that much. Still, nice addition. It doesn't really change substantially anything about Power or it's future adoption”

You can visit the IBM newsroom for more information on the announcements.

Red Hat joins the RISC-V foundation as a Silver level member

Vincy Davis
12 Aug 2019
2 min read
Last week, RISC-V announced that Red Hat is the latest major company to join the RISC-V Foundation. Red Hat has joined as a Silver-level member, which carries US$5,000 in dues per year and includes five discounted registrations for RISC-V workshops. RISC-V states in the official blog post that “As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.”

RISC-V is a free and open-source hardware instruction set architecture (ISA) which aims to enable extensible software and hardware freedom in computing design and innovation. As a member of the RISC-V Foundation, Red Hat now officially agrees to support the use of RISC-V chips. Since RISC-V has yet to produce major software and hardware with competitive performance, companies backing it are expected to continue using both Arm and RISC-V chips for now.

Read More: RISC-V Foundation officially validates RISC-V base ISA and privileged architecture specifications

In January, Raspberry Pi also joined the RISC-V Foundation, though it has not announced whether it will release a RISC-V developer board instead of using Arm-based chips. IBM has been a RISC-V Foundation member for many years. In October last year, Red Hat, the major distributor of open-source software and technology, was acquired by IBM for $34 billion, with an aim to deliver a next-generation hybrid multicloud platform; it follows that IBM would want Red Hat to join the RISC-V Foundation as well. Other tech giants like Google, Qualcomm, Samsung, and Alibaba are also part of the RISC-V Foundation.

Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap

Amrata Joshi
20 Jun 2019
7 min read
Last month, Manuel A. Fernandez Montecelo, a Debian contributor and developer, talked about the Debian GNU/Linux riscv64 port at the RISC-V workshop. Debian is a Unix-like operating system consisting of free software, supported by a community of individuals who care about free and open-source software. The goal of the Debian GNU/Linux riscv64 port project has been to have Debian ready for installation and running on systems that implement a variant of the RISC-V instruction set architecture. The feedback on his presentation at the workshop was positive. Earlier this week, Manuel A. Fernandez Montecelo announced an update on the status of the port. The announcement comes weeks before the release of buster, which will bring another set of changes to benefit the port.

What is RISC-V used for and why is Debian interested in building this port?

According to the Debian wiki page, “RISC-V (pronounced "risk-five") is an open source instruction set architecture (ISA) based on established reduced instruction set computing (RISC) principles. In contrast to most ISAs, RISC-V is freely available for all types of use, permitting anyone to design, manufacture and sell RISC-V chips and software. While not the first open ISA, it is significant because it is designed to be useful in modern computerized devices such as warehouse-scale cloud computers, high-end mobile phones and the smallest embedded systems. Such uses demand that the designers consider both performance and power efficiency. The instruction set also has a substantial body of supporting software, which fixes the usual weakness of new instruction sets. In this project the goal is to have Debian ready to install and run on systems implementing a variant of the RISC-V ISA: Software-wise, this port will target the Linux kernel. Hardware-wise, the port will target the 64-bit variant, little-endian. This ISA variant is the "default flavour" recommended by the designers, and the one that seems to attract more interest for planned implementations that might become available in the next few years (development boards, possible consumer hardware or servers).”

Update on Debian GNU/Linux riscv64 port

[Graph: percentage of arch-dependent packages built for riscv64 over time. Source: Debian]

The percentage of arch-dependent packages built for riscv64 has been at or above roughly 80% since mid-2018. Arch-dependent packages make up almost half of Debian's [main, unstable] archive; the arch-independent packages in the other half can be used by all ports, provided that the software they rely on is present. Overall, around 90% of packages from the whole archive are available for this architecture.

[Graph: package availability across architectures. Source: Debian]

The second graph shows that the percentages are very stable for all architectures. Montecelo writes, “This is in part due to the freeze for buster, but it usually happens at other times as well (except in the initial bring-up or in the face of severe problems).” Even the second-class ports appear to be stable. Montecelo writes, “Together, both graphs are also testament that there are people working on ports at all times, keeping things working behind the scenes, and that's why from a high level view it seems that things just work.” According to him, apart from the work of the porters themselves, there are people working on bootstrapping issues who make it easier than in the past to bring up ports, and who make it easier to cope when toolchain support or other port-related issues blow up.
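The two figures quoted above are consistent with each other: if roughly half the archive is architecture-dependent and about 80% of those packages build on riscv64, while the architecture-independent half serves every port, the overall availability works out to roughly 90%. A quick sketch of the arithmetic (the even 50/50 split is our simplifying assumption based on the article's "almost half"):

```python
arch_dep_share = 0.5     # arch-dependent packages: almost half the archive
arch_dep_built = 0.80    # ~80% of them build on riscv64
arch_indep_built = 1.0   # arch-independent packages are usable by every port

overall = arch_dep_share * arch_dep_built + (1 - arch_dep_share) * arch_indep_built
print(f"{overall:.0%} of the whole archive available")  # 90%
```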
He further added, “And, of course, all other contributors of Debian help by keeping good tools and building rules that work across architectures, patching the upstream software for the needs of several architectures at the same time (endianness, width of basic types), many upstream projects are generic enough that they don't need specific porting, etc.”

Future scope and improvements yet to come

Getting Debian running on RISC-V will not be easy, for various reasons, including the limited availability of hardware able to run the port and the limited options for bootloaders. According to Montecelo, this is an area they intend to improve. He added, “Additionally, it would be nice to have images publicly available and ready to use, for both Qemu and hardware available like the HiFive Unleashed (or others that might show up in time), but although there's been some progress on that, it's still not ready and available for end users.”

Presently, more than 500 packages from the Rust ecosystem in the archive (about 4%) cannot be built and used until Rust gains support for the architecture; Rust requires LLVM, and there is no Rust compiler based on GCC or other toolchains. Montecelo writes, “Firefox is the main high-level package that depends on Rust, but many packages also depend on librsvg2 to render SVG images, and this library has been converted to Rust. We're still using the C version for that, but it cannot be sustained in the long term.” Apart from Rust, other packages use LLVM to some extent, but LLVM is not yet fully working for riscv64; its support for the architecture is expected to be completed this year. On other programming languages, he writes, “There are other programming language ecosystems that need attention, but they represent a really low percentage (only dozens of packages, of more than 12 thousand; and with no dependencies outside that set). And then, of course, there is a long tail of packages that cannot be built due to a missing dependency, lack of support for the architecture or random failures -- together they make a substantial number of the total, but they need to be looked at and solved almost on a case-by-case basis.”

Why are people excited about this?

Many users seem excited about the news, one reason being that there won’t be a need to bootstrap from scratch: Rust will be able to cross-compile easily once riscv64 support lands. A user commented on Hacker News, “Debian Rust maintainer here. We don't need to bootstrap from scratch, Rust (via LLVM) can cross-compile very easily once riscv64 support is added.” Cross-compiling in general has come a long way on Debian, which is also good news for the port. Others are waiting for more of the ecosystem to land; one user commented, “I am waiting until the Bitmanip extension lands to get excited about RISC-V: https://github.com/riscv/riscv-bitmanip”

A few others see LLVM support for riscv64 as the key missing piece. One user commented, “The lack of LLVM backend surprises me. How much work is it to add a backend with 60 instructions (and few addressing modes)? It's clearly far more than I would have guessed.” Another comment reads, “Basically LLVM is now a dependency of equal importance to GCC for Debian. Hopefully this will help motivate expanding architecture-support for LLVM, and by proxy Rust.” According to users, the port currently misses on two major points: LLVM compiler support for riscv64 and a Rust implementation based on GCC. If LLVM gains riscv64 support this year, the port will benefit from LLVM's design of pairing a front end for any programming language with a back end for any instruction set architecture; a GCC-based Rust, for its part, would bring along the many language extensions that GCC provides.

A user commented on Reddit, “The main blocker to finish the port is having a working Rust toolchain. This is blocked on LLVM support, which only supports RISCV32 right now, and RISCV64 LLVM support is expected to be finished during 2019.” Another comment reads, “It appears that enough people in academia are working on RISCV for LLVM to accept it as a mainstream backend, but I wish more stakeholders in LLVM would make them reconsider their policy.”

To know more about this news, check out Debian’s official post.

Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop

Sugandha Lahoti
08 May 2019
4 min read
At the ongoing 2019 Google I/O, Google made a major overhaul to its Flutter UI framework. Flutter has now expanded from mobile to multi-platform. The company released the first technical preview of Flutter for web, and the core framework for mobile devices was upgraded to Flutter 1.5. On desktop, Flutter remains an experimental project: it is not production-ready, but the team has published early instructions for developing apps to run on Mac, Windows, and Linux. An embedding API for Flutter is also available that allows it to be used in scenarios such as home and automotive. Google notes, “The core Flutter project has been making progress to enable desktop-class apps, with input paradigms such as keyboard and mouse, window resizing, and tooling for Chrome OS app development. The exploratory work that we did for embedding Flutter into desktop-class apps running on Windows, Mac and Linux has also graduated into the core Flutter engine.”

Flutter for Web

Flutter for web allows web-based applications to be built using the Flutter framework. Per Google, with Flutter for web you can create “highly interactive, graphically rich content,” though it plans to continue evolving this version with a “focus on performance and harmonizing the codebase.” It lets developers compile existing Flutter code written in Dart into a client experience that can be embedded in the browser and deployed to any web server. Google teamed up with the New York Times to build a small puzzle game called Kenken as an early example of what can be built with Flutter for web. The game uses the same code across Android, iOS, the web, and Chrome OS.

[Image source: Google Blog]

Flutter 1.5

Flutter 1.5 brings a variety of new features, including updates to its iOS and Material widgets and engine support for new types of mobile devices. The release also adds support for Dart 2.3 with extensive UI-as-code functionality, as well as an in-app payment library that will make monetizing Flutter-based apps easier. Google also showcased an ML Kit Custom Image Classifier at Google I/O 2019, built using Flutter and Firebase. The kit offers an easy-to-use, app-based workflow for creating custom image-classification models: you can collect training data using the phone’s camera, invite others to contribute to your datasets, trigger model training, and use trained models, all from the same app. Google has also released a comprehensive new training course for Flutter, built by The App Brewery; the course is available at a time-limited discount, from $199 down to just $10.

Some users had trouble making sense of Google’s move and were left wondering whether Google wants people to invest in learning Dart or Kotlin. For reference, Flutter is built entirely in Dart, while Google made two major Kotlin announcements at this Google I/O: Android development will become increasingly Kotlin-first, and Google announced the first preview of Jetpack Compose, a new open-source UI toolkit for Kotlin developers.

A comment on Hacker News reads, “This is massively confusing. Do we invest in Kotlin ...or do we invest in Dart? Where will Android be in 2 years: Dart or Kotlin?” In response, another comment reads, “I don't think anyone has a definite answer, not even Google itself. Google placed several bets on different technologies and community will ultimately decide which of them is the winning one. Personally, I think native Android (Kotlin) and iOS (Swift) development is here to stay. I have tried many cross-platform frameworks and on any non-trivial mobile app, all of them cause more problem than they solve.” Another said, “If you want to do android development, Kotlin. If you want to do multi-platform development, flutter.” “Invest in Kotlin. Kotlin is useful for Android NOW. Whenever Dart starts becoming more mainstream, you'll know and have enough time to react to it”, was another user’s opinion.

Read the entire conversation on Hacker News.
Intel releases patches to add Linux Kernel support for upcoming dedicated GPU releases

Melisha Dsouza
18 Feb 2019
2 min read
Last week, Intel released a big patch series introducing the concept of memory regions to the Intel "i915" Linux kernel DRM graphics driver. Intel stated that the patches are in “preparation for upcoming devices with device local memory”, without giving out any specific details of those “upcoming devices”. In December 2018, Intel made its plans clear that it is working on everything from integrated GPUs and discrete graphics for gaming to GPUs for data centers; fast forward to 2019, and Intel is now testing the drivers required to make them run. Phoronix was the first to speculate that this device-local memory is for Intel's discrete graphics cards with dedicated vRAM, expected to debut in 2020. Specifying its motivation behind the new patches, Intel tweeted: https://twitter.com/IntelGraphics/status/1096537915222642689

Once implemented, the patches will, among other things, allow a system to:
- Have different "regions" of memory for system memory and for any device-local memory (LMEM).
- Introduce a simple allocator and allow the existing GEM memory-management code to allocate memory to different memory regions.
- Provide fake LMEM (local memory) regions to exercise the new code path.

These patches lay the groundwork for Linux support for the upcoming dedicated GPUs. According to Phoronix’s Michael Larabel, "With past generations of Intel graphics, we generally see the first Linux kernel patches roughly a year or so out from the actual hardware debut."

Twitter users have expressed enthusiasm about the announcement:
https://twitter.com/benjamimgois/status/1096544747597037571
https://twitter.com/ebound/status/1096498313392783360

You can head over to Freedesktop.org to have a look at these patches.
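The memory-region idea can be illustrated with a toy allocator. This is purely an illustrative Python sketch, not the i915 driver's C API: it keeps a separate pool for system memory and for device-local memory (LMEM) and satisfies each allocation from whichever region the caller requests, which is roughly the capability the patch series adds to GEM.

```python
class MemoryRegion:
    """A trivial bump allocator over one region (system RAM or LMEM)."""

    def __init__(self, name, size):
        self.name, self.size, self.used = name, size, 0

    def alloc(self, nbytes):
        if self.used + nbytes > self.size:
            raise MemoryError(f"{self.name}: out of space")
        offset = self.used
        self.used += nbytes
        return offset  # start offset of the new buffer within this region

# Hypothetical sizes, for illustration only.
regions = {
    "SMEM": MemoryRegion("SMEM", size=256 * 1024 * 1024),  # system memory
    "LMEM": MemoryRegion("LMEM", size=64 * 1024 * 1024),   # device-local memory
}

def buffer_alloc(region_name, nbytes):
    """Place a buffer in the requested region, as the patches let GEM do."""
    return regions[region_name].alloc(nbytes)

print(buffer_alloc("LMEM", 4096))  # 0: first buffer in device-local memory
print(buffer_alloc("LMEM", 4096))  # 4096: the next buffer follows it
```

The "fake LMEM" item in the list above serves the same purpose as the hypothetical `regions` dict here: it lets the new allocation paths be exercised before hardware with real device-local memory exists.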
Researchers prove that Intel SGX and TSX can hide malware from antivirus software
Uber releases AresDB, a new GPU-powered real-time Analytics Engine
TensorFlow team releases a developer preview of TensorFlow Lite with new mobile GPU backend support

Bhagyashree R
23 Jan 2019
2 min read

AMD releases AMD Open-Source Driver for Vulkan v-2019.Q1.2

Last week, the AMD team released the v-2019.Q1.2 version of the AMD Open Source Driver for Vulkan (AMDVLK). This release comes with fairly small updates, including a DXVK fix, one new Vulkan extension, and a few more changes.

What's new in v-2019.Q1.2

- The XGL code exposes YUV planes directly to allow applications to implement their own color conversion.
- Symbols are no longer included when building the driver in its release configuration, which could help with performance.
- The default WgpMode is updated from wgp to cu.
- The performance regression introduced by the updates that added support for the LOAD_INDEX path for handling pipeline binds is now fixed.

AMDVLK architecture: The following diagram shows its architecture:

Source: GitHub

AMD open-sourced AMDVLK in 2017; it was earlier part of the AMDGPU-PRO driver. It is a Vulkan driver for Radeon graphics adapters on Linux and is built on top of AMD's Platform Abstraction Library (PAL). PAL provides hardware and OS abstractions for Radeon (GCN+) user-mode 3D graphics drivers. It also provides users with a consistent experience across platforms, including support for recently released GPUs and compatibility with AMD developer tools. As PAL does not come with a shader compiler, clients are expected to use an external compiler library that targets PAL's Pipeline ABI to produce compatible shader binaries. AMDVLK compiles the shaders of a VkPipeline object as a single entity using the LLVM-Based Pipeline Compiler (LLPC) library. LLPC is built on the existing LLVM shader-compilation infrastructure for AMD GPUs to generate code objects that are compatible with PAL's pipeline ABI. To know more about AMDVLK in detail, you can check out its GitHub repository.

AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD's deep learning plans
AMD open sources V-EZ, the Vulkan wrapper library
AMD's $293 million JV with Chinese chipmaker Hygon starts production of x86 CPUs

Natasha Mathur
27 Nov 2018
2 min read

Amazon FreeRTOS adds a new ‘Bluetooth low energy support’ feature

The Amazon team announced a newly added Bluetooth Low Energy (BLE) support feature for Amazon FreeRTOS. Amazon FreeRTOS is an open source, free to download and use IoT operating system for microcontrollers that makes it easy for you to program, deploy, secure, connect, and manage small, low-powered devices. It extends the FreeRTOS kernel (a popular open source operating system for microcontrollers) with software libraries that make it easy to connect your small, low-power devices to AWS cloud services or to more powerful devices that run AWS IoT Greengrass, software that helps extend cloud capabilities to local devices.

With the help of Amazon FreeRTOS, you can collect data from these devices for IoT applications. Earlier, it was only possible to connect devices to a local network using common connection options such as Wi-Fi and Ethernet. Now, with the addition of the new BLE feature, you can securely build a connection between Amazon FreeRTOS devices that use BLE and AWS IoT via Android and iOS devices. BLE support in Amazon FreeRTOS is currently available in beta.

Amazon FreeRTOS is widely used in industrial applications, B2B solutions, and consumer products from companies like appliance, wearable technology, or smart lighting manufacturers.

For more information, check out the official Amazon FreeRTOS update post.

FreeRTOS affected by 13 vulnerabilities in its TCP/IP stack
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018

Natasha Mathur
20 Sep 2018
4 min read

Hybrid nanomembranes make conformal wearable sensors possible, South Korean researchers demo imperceptible loudspeakers and mics

A team of researchers from Ulsan National Institute of Science and Technology (UNIST) in South Korea has developed an ultrathin, transparent wearable device that is capable of turning your skin into a loudspeaker. The device was created to help hearing- and speech-impaired people. However, it has potential applications in other domains such as wearable IoT sensors and healthcare devices.

Figure: Skin-attachable NM loudspeaker

This new device is created with conductive hybrid nanomembranes (NMs) of nanoscale thickness, comprising an orthogonal silver nanowire array embedded in a polymer matrix. This substantially enhances the electrical as well as mechanical properties of ultrathin polymer NMs. There is no loss in optical transparency because of the orthogonal array structure. "Here, we introduce ultrathin, conductive, and transparent hybrid NMs that can be applied to the fabrication of skin-attachable NM loudspeakers and microphones, which would be unobtrusive in appearance because of their excellent transparency and conformal contact capability," as mentioned in the research paper.

Hybrid NMs significantly enhance the electrical and mechanical properties of ultrathin polymer NMs, which can then be intimately attached to human skin. The nanomembrane is then used as a loudspeaker that can be attached to almost anything to produce sound. The researchers also introduced a similar device that acts as a microphone, which can be connected to smartphones and computers for unlocking voice-activated security systems.

Skin-attachable and transparent NM loudspeaker

The researchers fabricated a skin-attachable loudspeaker using hybrid NMs. This speaker is capable of emitting thermoacoustic sound with the help of temperature-induced oscillation of the surrounding air. This temperature oscillation is caused by Joule heating of the orthogonal AgNW array upon the application of an AC voltage.

The sound emitted from the NM loudspeaker was then analyzed with the help of an acoustic measurement system. "We used a commercial microphone to collect and record the sound produced by the loudspeaker. To characterize the sound generation of the loudspeaker, we confirmed that the sound pressure level (SPL) of the output sound increases linearly as the distance between the microphone and the loudspeaker decreases," reads the research paper.

Wearable and transparent NM microphone

The researchers also designed a wearable, transparent microphone using hybrid NMs combined with micropatterned PDMS (NM microphone). This microphone is capable of detecting sound and recognizing the human voice. These wearable microphones are sensors attached to a speaker's neck to sense the vibration of the vocal folds.

Figure: Skin-attachable NM microphone

The skin-attachable NM microphone comprises a hybrid NM mounted on a micropyramid-patterned polydimethylsiloxane (PDMS) film. This sandwich-like structure helps precisely detect the sound and vibration of the vocal cords through the generation of a triboelectric voltage. The triboelectric voltage results from the coupling effect of contact electrification and electrostatic induction. The sensor works by converting the frictional force generated by the oscillation of the transparent conductive nanofiber into electric energy. The sensitivity of the NM microphone in response to sound emissions was evaluated by fabricating two device structures: a freestanding hybrid NM integrated with a holey PDMS film (the NM microphone), and another fully adhered to a planar PDMS film without a hole. "As a proof-of-concept demonstration, our NM microphone was applied to a personal voice security system requiring voice-based identification applications. The NM microphone was able to accurately recognize a user's voice and authorize access to the system by the registrant only," reads the research paper.

For more details, check out the official research paper.

Now Deep reinforcement learning can optimize SQL Join Queries, says UC Berkeley researchers
MIT's Transparency by Design Network: A high-performance model that uses visual reasoning for machine interpretability
Swarm AI that enables swarms of radiologists, outperforms specialists or AI alone in predicting Pneumonia
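As a rough illustration of the SPL-versus-distance measurement described above: in an idealized free field, sound pressure level falls off as 20·log10(d/d0) dB relative to a reference distance, i.e. about 6 dB per doubling of distance. This is a textbook inverse-distance model with made-up numbers, not the paper's actual data:

```python
import math

def spl_at_distance(spl_ref_db, d_ref_m, d_m):
    """Idealized free-field model: SPL drops ~6 dB per doubling of distance.
    spl_ref_db is the level measured at the reference distance d_ref_m."""
    return spl_ref_db - 20 * math.log10(d_m / d_ref_m)

# Hypothetical example: if the loudspeaker measures 60 dB SPL at 1 cm,
# moving the microphone farther away lowers the measured level:
for d in (0.01, 0.02, 0.04, 0.08):
    print(f"{d * 100:4.0f} cm: {spl_at_distance(60, 0.01, d):5.1f} dB")
```

This is why the researchers characterize the loudspeaker by recording SPL at several microphone distances: the closer the microphone, the higher the measured level.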
Richard Gall
04 Sep 2018
3 min read

German IoT startup relayr acquired by Munich Re for $300 million

relayr, an IoT middleware startup based in Berlin, has been purchased by German insurance group Munich Re. The deal, which values relayr at $300 million, gives Munich Re's subsidiary HSB 100% equity in the startup.

The move is significant, marking an important milestone in relayr's life as it has moved from a crowdfunded, chocolate-bar-shaped IoT kit to an industrial IoT middleware platform used by 130 businesses. Essentially, relayr provides businesses with the software needed to connect industrial infrastructure to the internet, so that information and data about the performance and safety of that machinery can be managed and analyzed from a centralized place.

But perhaps even more importantly, the acquisition is evidence of just how attractive IoT is to an insurance industry that sees data as a potential goldmine for gaining a detailed understanding of behavior and risk in a huge range of contexts and across demographics. It's worth noting that HSB has invested in relayr before: back in 2016, the company put money into the startup's series B round of funding.

What relayr and Munich Re had to say

relayr CEO Josef Brunner had this to say about the acquisition: "We are delighted to strengthen our relationship with Munich Re/HSB to push digitalization in commercial and industrial markets and strive for our mission to help commercial and industrial businesses stay relevant… The unique combination of the companies demonstrates the importance to deliver business outcomes to customers and the need to combine first-class technology and its delivery with powerful financial and insurance offerings. This transaction is a great opportunity to build a global category leader."

Meanwhile, Torsten Jeworrek from Munich Re's Board of Management said that the acquisition "supports our strategy to combine our knowledge of risk, data analysis skills and financial strength with the technological expertise of relayr. This is our basis to develop new ideas for tomorrow's commercial and industrial worlds."

You can hear in the enthusiasm of both statements that this is a deal that works incredibly well for both parties. Munich Re now has its hands on an industrial IoT startup that is already making headway in the market, while relayr now has the stability and support it needs to grow its business further. It will be interesting to see how the acquisition influences relayr's product development and how involved its parent company will be.

Read next

Why the Industrial Internet of Things (IIoT) needs Architects
Infosys and Siemens collaborate to build IoT solutions on MindSphere
IoT Forensics: Security in an always connected world where things talk

Bhagyashree R
04 Sep 2018
5 min read

NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 2018

Last week, James Carter and Stephen Smalley presented the architecture and security mechanisms of two operating systems, Zephyr and Fuchsia, at the Linux Security Summit 2018. James and Stephen are computer security researchers in the Information Assurance Research organization of the US National Security Agency (NSA). They discussed the current concerns in these operating systems and their contributions, and others', to further advance the security of these emerging open source operating systems. They also compared the security features of Zephyr and Fuchsia to Linux and Linux-based systems such as Android.

Zephyr

Zephyr is a scalable real-time operating system (RTOS) for IoT devices, supporting multiple architectures, with security as a main focus. It targets resource-constrained devices, seeking to be a new "Linux" for little devices.

Protection mechanisms in Zephyr

Zephyr introduced basic hardware-enforced memory protections in the v1.8 release, and these were officially supported in the v1.9 release. Microcontrollers need either a memory protection unit (MPU) or a memory management unit (MMU) to support these protection mechanisms. These mechanisms provide protection in the following ways:

- They enforce Read Only/No Execute (RO/NX) restrictions to protect read-only data from tampering.
- They provide runtime support for stack depth overflow protections.

The researchers' contribution was to review the basic memory protections and also develop a set of kernel memory protection tests, modeled after a subset of the lkdtm tests in Linux from KSPP. These tests were able to detect bugs and regressions in the Zephyr MPU drivers and are now part of the standard regression testing that Zephyr performs on all future changes.

Userspace support in Zephyr

In previous versions, everything ran in supervisor mode, so Zephyr introduced userspace support in v1.10 and v1.11. This requires the basic memory protection support and an MPU/MMU. It provides basic support for user mode threads with isolated memory. The researchers' contribution here was to develop userspace tests to verify some of the security-relevant properties for user mode threads, confirm the correctness of the x86 implementation, and validate the initial ARM and ARC userspace implementations.

App Shared Memory: a new feature contributed by the researchers

Originally, Zephyr gave all user threads access to the global variables of all applications. This imposed a high burden on application developers to:

- Manually organize application global variable memory layout to meet (MPU-specific) size/alignment restrictions.
- Manually define and assign memory partitions and domains.

To solve this problem, the researchers developed a new feature, due in the v1.13 release, known as App Shared Memory:

- It is a more developer-friendly way of grouping application globals based on desired protections.
- It automatically generates the linker script, section markings, and memory partition/domain structures.
- It provides helpers to ease application coding.

Fuchsia

Fuchsia is an open source microkernel-based operating system, primarily developed by Google. It is based on a new microkernel called Zircon and targets modern hardware such as phones and laptops.

Security mechanisms in Fuchsia

Microkernel security primitives

Regular handles: Through handles, userspace can access kernel objects. A handle identifies both an object and a set of access rights to that object. With the proper rights, one can duplicate objects, pass them across IPC, and obtain handles to child objects. Some of the concerns pointed out about regular handles are:

- If you have a handle to a job, you can get a handle to anything in the job using object_get_child()
- Leak of the root job handle
- Refining default rights down to least privilege
- Not all operations check access rights
- Some rights are currently unimplemented

Resource handles: These are a variant of handles for platform resources such as memory-mapped I/O, I/O ports, IRQs, and hypervisor guests. Some of the concerns pointed out about resource handles are:

- Coarse granularity of root resource checks
- Leak of the root resource handle
- Refining the root resource down to least privilege

Job policy: In Fuchsia, every process is part of a job, and jobs can have child jobs. Job policy is applied to all processes within a job. These policies include error handling behavior, object creation, and the mapping of WX memory. Some of the concerns pointed out about job policies are:

- Write-execute (WX) policy is not yet implemented
- Inflexible mechanism
- Refining job policies down to least privilege

vDSO (virtual dynamic shared object) enforcement: The vDSO is the only way to invoke system calls and is fully read-only. Some of the concerns pointed out about vDSO enforcement are:

- Potential for tampering with or bypassing the vDSO; for example, process_write_memory() allows you to overwrite the vDSO
- Limited flexibility, for example, compared to seccomp

Userspace mechanisms

Namespaces: A namespace is a collection of objects that you can enumerate and access.

Sandboxing: A sandbox is the configuration of a process's namespace, created based on its manifest.

Some of the concerns pointed out about namespaces and sandboxing are:

- Sandboxing applies only to application packages (not system services)
- Namespace and sandbox granularity
- No independent validation of sandbox configuration
- Currently uses global /data and /tmp

To address the aforementioned concerns, the researchers suggested a MAC framework. It could help in the following ways:

- Support finer-grained resource checks
- Validate namespace/sandbox configuration
- Control propagation, support revocation, and apply least privilege
- Just like in Android, provide a unified framework for defining, enforcing, and validating security goals for Fuchsia

This was a sneak peek from the talk. To know more about the architecture, hardware limitations, and security features of Zephyr and Fuchsia in detail, watch the presentation on YouTube: Security in Zephyr and Fuchsia - Stephen Smalley & James Carter, National Security Agency.

Cryptojacking is a growing cybersecurity threat, report warns
Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
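The handle-plus-rights model at the heart of Zircon's security primitives can be sketched as a toy Python model. This is purely conceptual: Zircon's real handles are kernel objects manipulated via syscalls, and the names and rights constants below are illustrative, not the actual Zircon API:

```python
# Toy model of Zircon-style handles: a handle pairs a kernel object with a
# rights bitmask, and duplicating a handle may only narrow, never widen,
# its rights -- the "refining down to least privilege" the talk discusses.
RIGHT_READ      = 1 << 0
RIGHT_WRITE     = 1 << 1
RIGHT_DUPLICATE = 1 << 2

class Handle:
    def __init__(self, obj, rights):
        self.obj = obj
        self.rights = rights

    def duplicate(self, requested_rights):
        """Return a new handle whose rights are a subset of this one's."""
        if not self.rights & RIGHT_DUPLICATE:
            raise PermissionError("handle lacks RIGHT_DUPLICATE")
        if requested_rights & ~self.rights:
            raise PermissionError("cannot widen rights on duplication")
        return Handle(self.obj, requested_rights)

root = Handle("vmo", RIGHT_READ | RIGHT_WRITE | RIGHT_DUPLICATE)
ro = root.duplicate(RIGHT_READ)  # narrowed: a read-only copy
print(bin(ro.rights))
```

The concerns listed above are about where this model currently falls short: some operations don't check rights at all, and powerful handles (the root job, the root resource) can leak with their full rights intact.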

Natasha Mathur
03 Sep 2018
3 min read

Facebook and Arm join Yocto Project as platinum members for embedded Linux development

Last week, the Yocto Project announced that Arm and Facebook will be joining the project as new platinum members. The Yocto Project is an open source collaboration project (originally an Intel project) that was launched back in 2011. It aims to allow developers to create customized Linux-based systems for embedded products. The Yocto Project comes with a flexible set of tools and offers a space where embedded developers across the globe share technologies, software, and best practices. This helps them build tailored Linux images for embedded and Internet of Things (IoT) devices.

According to Rhonda Dirvin, Senior Director, Marketing, Embedded & Automotive Line of Business, Arm, "The Yocto Project provides an excellent framework to facilitate embedded Linux development, and through our membership we will collaborate with the community to further advance Yocto Project's custom open-source distribution."

Earlier, Linaro, which consolidates and optimizes open source software and tools for the Arm architecture, was considered a competitor of the Yocto Project. However, that's not entirely the case, as the two groups have become complementary and Linaro's Arm toolchain can be used within the Yocto Project.

Facebook's role in the Yocto Project and embedded Linux

Facebook's role has been minor when it comes to embedded Linux. Facebook may be joining the Yocto Project because of a new project, or it may simply want to expand its open source presence. "The Yocto Project is the basis for important open source and embedded firmware initiatives. We are happy to lend our support to the Yocto Project community, and look forward to joining with other members in this important work," said Aaron Sullivan, Director of Hardware Engineering at Facebook.

The Yocto Project currently has more than 22 active members. "We are delighted to welcome Arm and Facebook to the Yocto Project at the Platinum level. With their continued support, we are furthering the embedded systems ecosystem and the Yocto Project as a whole," said Lieu Ta, Senior Director of Governance and Business Operations at Wind River and Chair of the Yocto Project Advisory Board.

The Yocto Project seems to be growing continually with Facebook and Arm joining in. Yocto will benefit from Facebook's and Arm's technical and financial support to consolidate it as a "secure, stable and adaptable industry standard". For more information, be sure to check out the official Yocto Project blog post.

Read next

Arm unveils its Client CPU roadmap designed for always-on, always-connected devices
Facebook's AI algorithm finds 20 Myanmar Military Officials guilty of spreading hate and misinformation, leads to their ban
A new conservative employee group within Facebook to protest Facebook's "intolerant" liberal policies
Bhagyashree R
22 Aug 2018
3 min read

Arm unveils its Client CPU roadmap designed for always-on, always-connected devices

Arm, the world's leading semiconductor IP company, has for the first time disclosed forward-looking compute performance data and a CPU roadmap for its Client Line of Business from now through 2020. Every year it introduces new world-class CPU designs, which have delivered double-digit gains in instructions-per-clock (IPC) performance since 2013. The aim is to enable the PC industry to overcome its reliance on Moore's law and deliver a high-performance, always-on, always-connected laptop experience.

Key highlights of this client compute CPU roadmap

2018: Earlier this year, the launch of the Cortex-A76 was announced. It delivers laptop-class performance while maintaining the power efficiency of a smartphone. We can expect to hear more about the first commercial devices on 7nm towards the end of the year and in the coming months.

2019: Arm will deliver the CPU codenamed 'Deimos' to its partners, a successor to the Cortex-A76. 'Deimos' is optimized for the latest 7nm nodes and is based on DynamIQ technology. DynamIQ redefines multi-core computing by combining the big and LITTLE CPUs into a single, fully-integrated cluster, with many new and enhanced benefits in power and performance from mobile to infrastructure. With these improvements, it is expected to deliver a 15+ percent increase in compute performance.

2020: The CPU codenamed 'Hercules' will be available to Arm partners. Like 'Deimos', it is also based on DynamIQ technology and will be optimized for both 5nm and 7nm nodes. It is expected to improve power and area efficiency by 10 percent, in addition to an increase in compute performance.

What does this roadmap tell us?

Arm aims to take advantage of the disruptive innovation 5G will bring to all client devices. The innovations from its silicon and foundry partners will help Arm SoCs (Systems on Chip) break through the dominance of x86 and gain substantial market share in Windows laptops and Chromebooks over the next five years. The Arm Artisan Physical IP platform and Arm POP IP will help partners get every bit of performance-per-watt they can out of their SoCs on whatever process node they choose.

This latest roadmap highlights that Arm is bringing new innovations and features to the PC industry with its annual design cadence. Arm will talk more about its latest product releases and ecosystem developments at Arm TechCon, which will be held in October this year. To know more about the CPU roadmap, head over to Arm's news post.

SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets
Intel's Spectre variant 4 patch impacts CPU performance
AMD's $293 million JV with Chinese chipmaker Hygon starts production of x86 CPUs

Kunal Chaudhari
23 Jul 2018
3 min read

Intel acquires eASIC, a custom chip (FPGA) maker for IoT, cloud and 5G environments

Last week Intel acquired eASIC, a fabless semiconductor company that makes customizable eASIC chips for use in wireless and cloud environments. The actual transaction amount for the merger was not disclosed by Intel. Intel believes this acquisition is more "strategic" than pure business, as competition in FPGAs is booming due to increasing demand for data and cloud services.

The rise of FPGAs and Intel's strategy to diversify beyond CPUs

FPGAs were first introduced back in the 80s and were considered an evolution in the path of fabless semiconductors. With each passing year, researchers have been trying to find innovative solutions to improve system performance to meet the needs of big data, cloud computing, mobile, networking, and other domains. The FPGA is at the heart of this quest to develop high-performing systems and is being paired with CPUs to accelerate compute-intensive operations.

Intel has a Programmable Solutions Group (PSG), which it created after acquiring Altera in 2015 for $16.7 billion. Altera is considered one of the leading FPGA manufacturers. The idea behind the eASIC acquisition is to complement Altera chips with eASIC's technology. Dan McNamara, corporate vice president and GM of the PSG division, mentioned in the official announcement: "We're seeing the largest adoption of FPGA ever because of the explosion of data and cloud services, and we think this will give us a lot of differentiation versus the likes of Xilinx."

Xilinx leads the race in the FPGA market, with Intel a distant second. The acquisition of eASIC is seen as a step towards catching up with the market leader. Intel's most recent quarterly earnings report showed that the PSG division earned $498 million with a 17% compound annual growth rate (CAGR), whereas the company's biggest division, the Client Computing Group (CCG), made $8.2 billion but with a CAGR of 3%. Although PSG's overall revenue is small compared to CCG's, it shows potential for future growth. Hence, Intel plans to increase its investment in acquiring forward-looking companies like eASIC, and it wouldn't be surprising to see more such acquisitions in the coming years. You can visit Intel's PSG blog for more interesting news on FPGAs.

Frenemies: Intel and AMD partner on laptop chip to keep Nvidia at bay
Baidu releases Kunlun AI chip, China's first cloud-to-edge AI chip
AMD's $293 million JV with Chinese chipmaker Hygon starts production of x86 CPUs
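For reference, a compound annual growth rate simply compounds a fixed growth factor year over year, which is why a small division with a high CAGR can close the gap on a much larger one. A quick sketch, using the reported figures purely to illustrate the compounding (not Intel's guidance):

```python
def project(revenue, cagr, years):
    """Project revenue forward at a constant compound annual growth rate."""
    return revenue * (1 + cagr) ** years

# PSG at $0.498B growing 17%/yr vs. CCG at $8.2B growing 3%/yr
# (quarterly figures used as-is, purely to show how compounding works):
for year in range(6):
    psg = project(0.498, 0.17, year)
    ccg = project(8.2, 0.03, year)
    print(f"year {year}: PSG ${psg:.3f}B  CCG ${ccg:.2f}B")
```

At these rates PSG roughly doubles about every four and a half years, while CCG grows only a few percent, which is the growth story behind Intel's interest in FPGA acquisitions.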