
Tech News - IoT and Hardware

119 Articles

Alibaba’s chipmaker launches open source RISC-V based ‘XuanTie 910 processor’ for 5G, AI, IoT and self-driving applications

Vincy Davis
26 Jul 2019
4 min read
Launched in 2018, Alibaba's chip subsidiary Pingtouge made a major announcement yesterday: it is launching its first product, the XuanTie 910 processor, built on the open-source RISC-V instruction set architecture. The XuanTie 910 is expected to reduce the costs of related chip production by more than 50%, reports Caixin Global. The XuanTie 910, also known as T-Head, will soon be available in the market for commercial use. Pingtouge will also release some of the XuanTie 910's code on GitHub for free to help the global developer community create innovative applications. No release dates have been revealed yet.

What are the properties of the XuanTie 910 processor?

The XuanTie 910 16-core processor scores 7.1 CoreMark/MHz, and its main frequency can reach 2.5GHz. The processor can be used to build high-end edge-based microcontrollers (MCUs), CPUs, and systems-on-chip (SoCs), for applications like 5G telecommunication, artificial intelligence (AI), and autonomous driving. The XuanTie 910 offers 40% higher performance than mainstream RISC-V implementations, along with a roughly 20% increase in instructions. According to Synced, the XuanTie 910 has two unconventional properties:
- It is a 2-stage pipelined, out-of-order, triple-issue processor with two memory accesses per cycle.
- Its computing, storage, and multi-core capabilities are superior thanks to an extended instruction set: the XuanTie 910 adds more than 50 instructions on top of standard RISC-V.

Last month, The Verge reported that an internal ARM memo instructed its staff to stop working with Huawei. With the US blacklisting China's telecom giant Huawei and banning any American company from doing business with it, it seems that ARM is following the American strategy. Although ARM is based in the U.K. and owned by Japan's SoftBank Group, it does have "US origin technology", as claimed in the internal memo. This may be one of the reasons why Alibaba is increasing its efforts in developing RISC-V, so that Chinese tech companies can become independent of Western technologies. A XuanTie 910 processor can assure Chinese companies of a stable future, with no fear of being banned by Western governments. Besides being cost-effective, RISC-V has other advantages over ARM, such as greater flexibility. With complex licence policies and higher power requirements, it is going to be a challenge for ARM to compete against RISC-V and MIPS (Microprocessor without Interlocked Pipeline Stages) processors.

A Hacker News user comments, "I feel like we (USA) are forcing China on a path that will make them more competitive long term." Another user says, "China is going to be key here. It's not just a normal market - China may see this as essential to its ability to develop its technology. It's Made in China 2025 policy. That's taken on new urgency as the west has started cutting China off from western tech - so it may be normal companies wanting some insurance in case intel / arm cut them off (trade disputes etc) AND the govt itself wanting to product its industrial base from cutoff during trade disputes"

Some users also feel that it is technology that wins when two big economies keep producing innovations. A comment on Hacker News reads, "Good to see development from any country. Obviously they have enough reason to do it. Just consider sanctions. They also have to protect their own market. Anyone that can afford it, should do it. Ultimately it is a good thing from technology perspective."

Not all US tech companies are wary of partnering with Chinese counterparts. Two days ago, Salesforce, an American cloud-based software company, announced a strategic partnership with Alibaba. The partnership aims to help Salesforce localize its products in mainland China, Hong Kong, Macau, and Taiwan, and will enable Salesforce customers to market, sell, and operate through services like Alibaba Cloud and Tmall.
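As a quick back-of-envelope check on the performance figures quoted above (plain arithmetic in Python; it assumes the 7.1 CoreMark/MHz score is per core, which the report does not state):

```python
# Back-of-envelope arithmetic on the quoted XuanTie 910 figures.
# Assumption: the 7.1 CoreMark/MHz score is a per-core figure.
coremark_per_mhz = 7.1
peak_freq_mhz = 2500  # "main frequency can reach 2.5GHz"

per_core = coremark_per_mhz * peak_freq_mhz
print(f"~{per_core:,.0f} CoreMark per core at peak clock")  # ~17,750
```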


Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ

Bhagyashree R
24 Jul 2019
3 min read
On Monday, the Wall Street Journal reported that Apple is in advanced talks to buy Intel's smartphone-modem business for at least $1 billion, citing people familiar with the matter. The deal, which would cover a portfolio of patents and staff, is expected to be confirmed in the coming week. According to the report, the companies started discussing the deal last summer, around the time Intel's former CEO Brian Krzanich resigned. However, talks broke down when Apple signed a multiyear modem supply agreement with Qualcomm in April to settle a longstanding legal dispute over the royalties Qualcomm charges for its smartphone modems. After Apple's settlement with Qualcomm, Intel announced its plans to exit the 5G smartphone modem business. The company's new CEO, Bob Swan, said in a press release that there is no "path to profitability and positive returns" for Intel in the smartphone modem business. Intel then opened the offer to other companies but eventually resumed talks with Apple, which is seen as the "most-logical buyer" for its modem business.

How will this deal benefit Apple?

The move will help Apple jumpstart its efforts to make modem chips in-house. In recent years, Apple has been expanding its presence in the components market to eliminate its dependence on other companies for the hardware and software in its devices. It now designs its own application processors, graphics chips, Bluetooth chips, and security chips. Last year, Apple acquired patents, assets, and employees from Dialog Semiconductor, a British chipmaker, as part of a $600 million deal to bring power management designs in-house. With this deal, the tech giant will get access to Intel's engineering work and talent to help develop modem chips for the crucial next generation of wireless technology known as 5G, potentially saving years of development work.

How will this deal benefit Intel?

The deal will allow Intel to part ways with a business that hasn't been very profitable for the company. "The smartphone operation had been losing about $1 billion annually, a person familiar with its performance has said, and has generally failed to live up to expectations," the report reads. After its exit from the 5G smartphone modem business, the company wants to focus on 5G network infrastructure. Read the full story on the Wall Street Journal.


Azure Kinect Developer Kit is now generally available, will start shipping to customers in the US and China

Amrata Joshi
12 Jul 2019
3 min read
In February this year, at Mobile World Congress (MWC), Microsoft announced the $399 Azure Kinect Developer Kit, an all-in-one perception system for computer vision and speech solutions. Recently, Microsoft announced that the kit is generally available and will begin shipping to customers in the U.S. and China who preordered it. The Azure Kinect Developer Kit aims to offer developers a platform to experiment with AI tools, as well as help them plug into Azure's ecosystem of machine learning services.

The Azure Kinect DK camera system features a 1MP (1,024 x 1,024 pixel) depth camera, a 360-degree microphone array, a 12MP RGB camera that provides an additional color stream aligned to the depth stream, and an orientation sensor. It uses the same time-of-flight sensor that the company developed for the second generation of its HoloLens AR visor, and it also features an accelerometer and gyroscope (IMU) for sensor orientation and spatial tracking. Developers can also experiment with the field of view thanks to a global shutter and automatic pixel gain selection. The kit works with a range of compute types that can be used together to provide a "panoramic" understanding of the environment. This advancement might help Microsoft users in health and life sciences experiment with depth sensing and machine learning. During the keynote, Microsoft Azure corporate vice president Julia White said, "Azure Kinect is an intelligent edge device that doesn't just see and hear but understands the people, the environment, the objects, and their actions." She further added, "It only makes sense for us to create a new device when we have unique capabilities or technology to help move the industry forward."

A few users have complained about the product and expect changes in the future, highlighting issues with the mics, the SDK, the sample code, and more. A user commented on the Hacker News thread, "Then there's the problem that buries deep in the SDK is a binary blob that is the depth engine. No source, no docs, just a black box. Also, these cameras require a BIG gpu. Nothing is seemingly happening onboard. And you're at best limited to 2 kinects per usb3 controller. All that said, I'm still a very happy early adopter and will continue checking in every month or two to see if they've filled in enough critical gaps for me to build on top of." Others seem excited and think the camera will be helpful in projects. Another user commented, "This is really cool!", adding, "This camera is way better quality, so it'll be neat to see the sort of projects can be done now." To know more about the Azure Kinect Developer Kit, watch the video: https://www.youtube.com/watch?v=jJglCYFiodI
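For developers who want a quick feel for the device, here is a minimal capture sketch. It assumes the community-maintained pyk4a Python bindings (the official Sensor SDK itself is a C API) and an Azure Kinect DK connected over USB 3:

```python
# Minimal capture sketch using the community pyk4a bindings (assumption:
# pyk4a is installed and an Azure Kinect DK is connected via USB 3).
import pyk4a
from pyk4a import Config, PyK4A

k4a = PyK4A(
    Config(
        color_resolution=pyk4a.ColorResolution.RES_720P,
        depth_mode=pyk4a.DepthMode.NFOV_UNBINNED,
        synchronized_images_only=True,  # only captures with both color and depth
    )
)
k4a.start()

capture = k4a.get_capture()
if capture.color is not None and capture.depth is not None:
    print("color:", capture.color.shape)  # BGRA image array
    print("depth:", capture.depth.shape)  # 16-bit depth map, millimeters

k4a.stop()
```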


Raspberry Pi 4 has a USB-C design flaw, some power cables don't work

Vincy Davis
10 Jul 2019
5 min read
Raspberry Pi 4 was released last month, with much hype and promotion. It has a 1.5GHz quad-core 64-bit ARM Cortex-A72 CPU, three memory options of up to 4GB, full-throughput gigabit Ethernet, and a USB-C port as a power connector. The USB-C power connector is a first for the Pi boards. However, four days after its release, Tyler Ward, an electronics and product engineer, disclosed that the new Pi 4 does not receive power when used with electronically marked (e-marked) USB-C cables, the type used by Apple MacBooks and other laptops. Two days ago, Pi's co-creator Eben Upton confirmed the issue. Upton says, "A smart charger with an e-marked cable will incorrectly identify the Raspberry Pi 4 as an audio adapter accessory, and refuse to provide power." He adds that Ward's technical breakdown of the underlying issue in the Pi 4's circuitry offers a detailed overview of why e-marked USB-C cables won't power the Pi.

According to Ward's blog, "The root cause of the problem is the shared cc pull down resistor on the USB Type-C connector. By looking at the reduced pi schematics, we can see it as R79 which connects to both the CC lines in the connector." "With most chargers this won't be an issue as basic cables only use one CC line which is connected through the cable and as a result the pi will be detected correctly and receive power. The problem comes in with e-marked cables which use both CC connections", he adds.

Ward has suggested some workarounds. First, he recommends using a non-e-marked cable, which most USB-C phone charger cables are likely to be, rather than an e-marked one. Older chargers with A-to-C cables or micro-B-to-C adaptors will also work if they provide enough power, as these don't require CC detection. The complete fix would be for Raspberry Pi to add a second CC resistor to the board in a future revision. Another option is to buy the $8/£8 official Raspberry Pi 4 power supply. In a statement to TechRepublic, Upton adds, "It's surprising this didn't show up in our (quite extensive) field testing program."

Benson Leung, a Google Chrome OS engineer, has also criticized Raspberry Pi in a Medium blog post sarcastically titled "How to design a proper USB-C™ power sink (hint, not the way Raspberry Pi 4 did it)". Leung identifies two critical mistakes on Raspberry Pi's part. First, he says Raspberry Pi should have copied the figure from the USB-C spec exactly instead of designing a new circuit: Raspberry Pi "designed this circuit themselves, perhaps trying to do something clever with current level detection, but failing to do it right." The second mistake, he says, is that they didn't actually test their Pi 4 design with advanced cables: "The fact that no QA team inside of Raspberry Pi's organization caught this bug indicates they only tested with one kind (the simplest) of USB-C cables."

Many users agreed with Leung and expressed their own views on the faulty USB-C design, finding it hard to believe that Raspberry Pi shipped these boards without trying a MacBook charger. A user on Hacker News comments, "I find it incredible that presumably no one tried using a MacBook charger before this shipped. If they did and didn't document the shortcoming that's arguably just as bad. Surely a not insignificant number of customers have MacBooks? If I was writing some test specs this use case would almost certainly feature, given the MacBook Pro's USB C adapter must be one of the most widespread high power USB C charger designs in existence. Especially when the stock device does not ship with a power supply, not like it was unforeseeable some customers would just use the chargers they already have."

Some are glad that they have not ordered their Raspberry Pi 4 yet.
https://twitter.com/kb2ysi/status/1148631629088342017
However, some users believe it's not that big a deal.
https://twitter.com/kb2ysi/status/1148635750210183175
A user on Hacker News comments, "Eh, it's not too bad. I found a cable that works and I'll stick to it. Even with previous-gen Pis there was always a bit of futzing with cables to find one that has small enough voltage drop to not get power warnings (even some otherwise "good" cables really cheap out on copper). The USB C thing is still an issue, and I'm glad it'll be fixed, but it's really not that big of a deal."

Neither Upton nor Raspberry Pi has disclosed a schedule for the board revision.
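To put numbers behind Ward's explanation, here is a rough Python sketch of source-side USB-C detection. The termination values are the nominal ones from the USB-C spec (Rd ≈ 5.1 kΩ sink pull-down, Ra ≈ 0.8-1.2 kΩ for the cable's VCONN load), and the classification thresholds are deliberately simplified, so treat it as an illustration rather than the spec's actual state machine:

```python
# Rough sketch of why the Pi 4's shared CC pull-down confuses e-marked cables.
# Assumed nominal USB-C terminations: Rd = 5.1 kOhm (sink pull-down),
# Ra ~ 1 kOhm (the e-marked cable's VCONN load). Thresholds are simplified.

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

def classify(cc1, cc2):
    """Very simplified source-side detection: what does the charger see?"""
    def looks_like_ra(r):  # Ra is spec'd at roughly 0.8-1.2 kOhm
        return r is not None and r < 2_000
    def looks_like_rd(r):  # Rd is spec'd at 5.1 kOhm plus tolerance
        return r is not None and 4_000 < r < 6_000
    if looks_like_ra(cc1) and looks_like_ra(cc2):
        return "audio adapter accessory (no power!)"
    if looks_like_rd(cc1) != looks_like_rd(cc2):
        return "sink attached, provide power"
    return "nothing attached / other"

RD, RA = 5_100.0, 1_000.0

# Compliant sink, e-marked cable: Rd on one CC pin, the cable's Ra on the other.
print(classify(RD, RA))  # -> sink attached, provide power

# Pi 4: one shared resistor ties CC1 and CC2 together, so the cable's Ra
# appears in parallel with Rd on *both* pins: ~0.84 kOhm, inside the Ra range.
shared = parallel(RD, RA)
print(round(shared), classify(shared, shared))  # -> 836 audio adapter accessory
```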


Apple is ditching butterfly keyboard and opting for a reliable scissor switch keyboard in MacBook, per an Apple analyst

Vincy Davis
05 Jul 2019
4 min read
Yesterday, Apple analyst Ming-Chi Kuo revealed in a report obtained by MacRumors that Apple is going to include a new scissor-switch keyboard in the 2019 MacBook Air. The scissor-switch keyboard is expected to use glass fiber to increase its durability. This means that Apple will finally do away with the butterfly keyboard, introduced in 2015, which has long been infamous for reliability and key-travel issues. The MacBook Pro will also get the new scissor-switch keyboard, but not until 2020. The scissor-switch mechanism attaches each key to the keyboard via two plastic pieces that interlock in a "scissors"-like fashion and snap to the keyboard and the key.

In a statement to MacRumors, Kuo says, "Though the butterfly keyboard is still thinner than the new scissor keyboard, we think most users can't tell the difference. Furthermore, the new scissor keyboard could offer a better user experience and benefit Apple's profits; therefore, we predict that the butterfly keyboard may finally disappear in the long term." Kuo also states that Apple's butterfly design was expensive to manufacture due to low yields. The scissor-switch keyboard might cost more than a regular laptop keyboard, but it will be cheaper than the butterfly keyboard.

The scissor-switch keyboard aims to improve the typing experience for Apple users. The existing butterfly keyboard has always been a controversial product, with users complaining about its durability. The butterfly design is sensitive to dust, with even the slightest particle causing keys to jam, along with heat issues. Last year, a class-action lawsuit was filed against Apple in a federal court in California for allegedly using the flawed butterfly keyboard design in its MacBook variants since 2015. Apple has released a tutorial on how to clean the butterfly keyboard of the MacBook or MacBook Pro, and has introduced four generations of butterfly keyboards attempting to address user complaints about stuck keys, repeated key inputs, and even the loud clacking sound when striking each keycap. In March this year, Apple officially apologised for inflicting MacBook owners with its dust-prone, butterfly-switch keyboard. The apology was in response to a critical report by the Wall Street Journal's Joanna Stern, which described how the MacBook's butterfly-switch keyboard can make typing the E, R, and T keys a nightmare.

The new scissor-switch keyboard is thus expected to be a big relief to MacBook customers. It is the same keyboard mechanism present in all pre-2015 MacBooks, which was well received by MacBook users back then, though the new model is expected to be a more meaningful evolution of the previous design. Kuo says the replacement keyboard will be supplied solely by specialist laptop keyboard maker Sunrex, rather than Wistron, which currently makes the butterfly keyboards for Apple. The analyst expects the new Sunrex keyboard to go into mass production in 2020, making the Taiwan-based firm Apple's most important keyboard supplier.

Users are relieved that Apple has finally decided to ditch the butterfly keyboard.
https://twitter.com/alon_gilboa/status/1146797852242448385
https://twitter.com/danaiciocan/status/1146772468432023553
https://twitter.com/najeebster/status/1146708948139106305
A user on Hacker News says, "Finally! It took four years to admit there is something wrong. And one more year to change upcoming laptops. It's unbelievable how this crap could be released. Coming from a ThinkPad to an MBP in 2015 I was disappointed by the keyboard of the MBP 2015. Then switching to an MBP 2018 I was shocked how much worse things could get"


Apple gets into chip development and self-driving autonomous tech business

Amrata Joshi
28 Jun 2019
3 min read
Apple recently hired Mike Filippo, lead CPU architect and one of the top chip engineers at ARM Holdings, the semiconductor and software design company. According to Filippo's updated LinkedIn profile, he joined Apple in May as an architect and is working out of the Austin, Texas area. He worked at ARM for ten years as the lead engineer for designing the chips used in most smartphones and tablets, and he previously was a key designer at chipmakers Advanced Micro Devices and Intel. In a statement to Bloomberg, an ARM spokesman said, "Mike was a long-time valuable member of the ARM community." He further added, "We appreciate all of his efforts and wish him well in his next endeavor."

Apple's A-series chips used in its mobile devices are based on ARM technology, while Mac computers have used Intel processors for more than a decade. Filippo's experience at these companies could therefore prove to be a major plus for Apple. Apple has reportedly planned to use its own chips in Mac computers in 2020, replacing Intel processors with ARM-architecture-based ones. Apple also plans to expand its in-house chip-making work to new device categories like a headset that meshes augmented and virtual reality, Bloomberg reports.

Apple acquires Drive.ai, an autonomous driving startup

Apart from the chip-making business, there are reports of Apple joining the race in self-driving autonomous technology. The company has its own self-driving vehicle project, called Titan, which is still a work in progress. On Wednesday, Axios reported that Apple has acquired Drive.ai, an autonomous driving startup valued at $200 million. Drive.ai was on the verge of shutting down and was laying off all its staff. The news indicates that Apple is interested in testing the waters of self-driving technology, and the move might help speed up the Titan project. Drive.ai had been in search of a buyer since February this year and had talked with many potential acquirers before closing the deal with Apple. Apple also purchased Drive.ai's autonomous cars and other assets. The acquisition amount has not yet been disclosed, but per a recent report, Apple was expected to pay less than the $77 million invested by venture capitalists. Apple has also hired engineers and managers from Waymo and Tesla, and has recruited around five software engineers from Drive.ai, per a report from the San Francisco Chronicle. It seems Apple is mostly hiring people who work in engineering and product design.

Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap

Amrata Joshi
20 Jun 2019
7 min read
Last month, Manuel A. Fernandez Montecelo, a Debian contributor and developer, talked about the Debian GNU/Linux riscv64 port at the RISC-V workshop. Debian is a Unix-like operating system consisting of free software, supported by a community of individuals who care about free and open-source software. The goal of the Debian GNU/Linux riscv64 port project is to have Debian ready for installation and running on systems that implement a variant of RISC-V, an open-source hardware instruction set architecture. The feedback on his presentation at the workshop was positive. Earlier this week, Montecelo announced an update on the status of the port. The announcement comes weeks before the release of buster, which will bring another set of changes to benefit the port.

What is RISC-V used for and why is Debian interested in building this port?

According to the Debian wiki page, "RISC-V (pronounced "risk-five") is an open source instruction set architecture (ISA) based on established reduced instruction set computing (RISC) principles. In contrast to most ISAs, RISC-V is freely available for all types of use, permitting anyone to design, manufacture and sell RISC-V chips and software. While not the first open ISA, it is significant because it is designed to be useful in modern computerized devices such as warehouse-scale cloud computers, high-end mobile phones and the smallest embedded systems. Such uses demand that the designers consider both performance and power efficiency. The instruction set also has a substantial body of supporting software, which fixes the usual weakness of new instruction sets. In this project the goal is to have Debian ready to install and run on systems implementing a variant of the RISC-V ISA: Software-wise, this port will target the Linux kernel. Hardware-wise, the port will target the 64-bit variant, little-endian. This ISA variant is the "default flavour" recommended by the designers, and the one that seems to attract more interest for planned implementations that might become available in the next few years (development boards, possible consumer hardware or servers)."

Update on Debian GNU/Linux riscv64 port

Image source: Debian

In the first graph, the percentage of arch-dependent packages built for riscv64 (grey line) has been at or above roughly 80% since mid-2018. Arch-dependent packages are almost half of Debian's [main, unstable] archive; arch-independent packages can be used by all ports, provided the software they rely on is present. Around 90% of packages from the whole archive have been made available for this architecture.

Image source: Debian

The second graph shows that the percentages are very stable for all architectures. Montecelo writes, "This is in part due to the freeze for buster, but it usually happens at other times as well (except in the initial bring-up or in the face of severe problems)." Even the second-class ports appear to be stable. Montecelo writes, "Together, both graphs are also testament that there are people working on ports at all times, keeping things working behind the scenes, and that's why from a high level view it seems that things just work." According to him, apart from the work of the porters themselves, there are people working on bootstrapping issues who make it easier to bring up ports than in the past, and to cope when toolchain support or other port-related issues blow up. He further added, "And, of course, all other contributors of Debian help by keeping good tools and building rules that work across architectures, patching the upstream software for the needs of several architectures at the same time (endianness, width of basic types), many upstream projects are generic enough that they don't need specific porting, etc."

Future scope and improvements yet to come

Getting Debian running on RISC-V will not be easy, for various reasons including the limited availability of hardware able to run the Debian port and limited options for bootloaders. According to Montecelo, this is an area where improvement is needed. He further added, "Additionally, it would be nice to have images publicly available and ready to use, for both Qemu and hardware available like the HiFive Unleashed (or others that might show up in time), but although there's been some progress on that, it's still not ready and available for end users."

Presently, more than 500 packages from the Rust ecosystem in the archive (about 4%) cannot be built and used until Rust gains support for the architecture; Rust requires LLVM, and there is no Rust compiler based on GCC or other toolchains. Montecelo writes, "Firefox is the main high-level package that depends on Rust, but many packages also depend on librsvg2 to render SVG images, and this library has been converted to Rust. We're still using the C version for that, but it cannot be sustained in the long term." Apart from Rust, other packages use LLVM to some extent, but LLVM is not yet fully working for riscv64; its support for riscv64 is expected to be completed this year. On other programming languages, he writes, "There are other programming language ecosystems that need attention, but they represent a really low percentage (only dozens of packages, of more than 12 thousand; and with no dependencies outside that set). And then, of course, there is a long tail of packages that cannot be built due to a missing dependency, lack of support for the architecture or random failures -- together they make a substantial number of the total, but they need to be looked at and solved almost on a case-by-case basis."

Why are people excited about this?

Many users seem excited, one reason being that there won't be a need to bootstrap from scratch: Rust will be able to cross-compile easily once riscv64 support lands. A user commented on Hacker News, "Debian Rust maintainer here. We don't need to bootstrap from scratch, Rust (via LLVM) can cross-compile very easily once riscv64 support is added." This is also good news for Debian, as cross-compiling has come a long way there. Others are waiting for more to be incorporated into RISC-V. Another user commented, "I am waiting until the Bitmanip extension lands to get excited about RISC-V: https://github.com/riscv/riscv-bitmanip" A few others think LLVM support for riscv64 is the key missing piece. One user commented, "The lack of LLVM backend surprises me. How much work is it to add a backend with 60 instructions (and few addressing modes)? It's clearly far more than I would have guessed." Another comment reads, "Basically LLVM is now a dependency of equal importance to GCC for Debian. Hopefully this will help motivate expanding architecture-support for LLVM, and by proxy Rust."

According to users, the port falls short on two major points: LLVM compiler support for riscv64 and a Rust toolchain based on GCC. If the port gets LLVM support this year, developers will benefit from LLVM's front ends for many programming languages and its backends for many instruction set architectures; a GCC-based Rust, in turn, would bring the many language extensions GCC provides. A user commented on Reddit, "The main blocker to finish the port is having a working Rust toolchain. This is blocked on LLVM support, which only supports RISCV32 right now, and RISCV64 LLVM support is expected to be finished during 2019." Another comment reads, "It appears that enough people in academia are working on RISCV for LLVM to accept it as a mainstream backend, but I wish more stakeholders in LLVM would make them reconsider their policy." To know more about this news, check out Debian's official post.


World’s first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase

Fatema Patrawala
04 Jun 2019
3 min read
ANA Holdings Inc., HaptX, SynTouch, and Shadow Robot Company are set to unveil the next generation of robotics technology at the upcoming Amazon re:MARS expo, held in Las Vegas from June 4th to 7th. By incorporating the latest tech from across the field of robotics, they have built a teleoperation and telepresence system featuring the first robotic hand to successfully transmit touch sensations, a technology the inventors hail as the 'Holy Grail of robotics'. They have combined Shadow Robot's world-leading dexterous robotic hand with SynTouch's biomimetic tactile sensors and HaptX's realistic haptic feedback gloves, enabling unprecedentedly precise remote control of a robotic hand.

In a recent test, a human operator in California operated a computer keyboard in London, with each keystroke detected through fingertip sensors on their glove and faithfully relayed 5,000 miles to the Dexterous Hand to recreate. The partners say that combining touch with teleoperation in this way is ground-breaking for future applications that perform actions at a distance, e.g. bomb disposal, deep-sea engineering, or even surgery performed across different states. At the Amazon re:MARS tech showcase, the team will demonstrate their teleoperation and telepresence technology outside the lab for the first time. Check out this video to understand how this technology will function.

Kevin Kajitani, Co-Director of ANA HOLDINGS INC., Avatar Division, says, "We are only beginning to scratch the surface of what is possible with these advanced Avatar systems and through telerobotics in general. In addition to sponsoring the $10M ANA Avatar XPRIZE, we've approached our three partner companies to seek solutions that will allow us to develop a high performance, intuitive, general-purpose Avatar hand. We believe that this technology will be key in helping humanity connect across vast distances."

Jake Rubin, Founder and CEO of HaptX, says, "Our sense of touch is a critical component of virtually every interaction. The collaboration between HaptX, Shadow Robot Company, SynTouch, and ANA brings a natural and realistic sense of touch to robotic manipulation for the first time, eliminating one of the last barriers to true telepresence."

Dr. Jeremy Fishel, Co-Founder of SynTouch, says, "We've got something exciting up our sleeves for re:MARS this year. Users will see just how essential the sense of touch is when it comes to dexterity and manipulation and the various applications it can have within industry."

Rich Walker, Managing Director of the Shadow Robot Company, says, "Our remotely controlled system can help transform work within risky environments such as nuclear decommissioning and we're already in talks with the UK nuclear establishment regarding the application of this advanced technology. It adds a layer of safety between the worker and the radiation zone as well as increasing precision and accuracy within glovebox-related tasks."

Paul Cutsinger, Head of Voice Design Education at Amazon Alexa, says, "re:MARS embraces an optimistic vision for scientific discovery to advance a golden age of innovation and this teleoperation technology by the Shadow Robot Company, SynTouch and HaptX more than fits the bill. It must be seen."


Arm announces new CPU and GPU chipset designs, Mali-G77 GPU, Cortex-A77 CPU, and much more!

Amrata Joshi
28 May 2019
3 min read
Yesterday, Arm, whose basic chip architecture is used in most smartphones, announced new designs for its premium CPU and GPU chipsets; the first actual chips are expected before the end of the year. The company announced the Mali-G77 GPU, the Cortex-A77 CPU, and an energy-efficient machine learning processor.
https://twitter.com/Arm/status/1133029847637344256

Cortex-A77 CPU

As with every new generation of Arm CPUs, the Cortex-A77 promises to be more power efficient and provide better raw processing performance. The Cortex-A77 has been built to fit in smartphone power budgets while improving performance. It is a second-generation design that brings a substantial performance upgrade over the Cortex-A76. The Cortex-A77 is built for next-generation laptops and smartphones, for upcoming use cases like advanced ML, and to support the range of 5G-ready devices set to come to market following the 5G rollout in 2019. Thanks to a combination of hardware and software optimizations, the Cortex-A77 also brings better machine learning performance. It delivers more than 20 percent higher integer performance, more than 35 percent higher floating-point performance, and more than 15 percent more memory bandwidth.

Mali-G77 GPU

The company's new Mali-G77 GPU architecture is the first based on its Valhall GPU design. It offers around 1.4x the performance of the G76, is 30 percent more energy efficient, and is 60 percent faster at running machine learning inference and neural net workloads. The Mali-G77 provides uncompromising graphics performance and brings performance improvements to complex AR and ML to drive future use cases.
https://twitter.com/Arm/status/1132992854282915841

Machine learning processor

Arm already offers Project Trillium, its heterogeneous machine learning compute platform, for the machine learning processor. Since announcing Trillium last year, Arm has improved energy efficiency by 2x and scaled performance up to 8 cores and 32 TOP/s. The machine learning processor is based on a new architecture that targets connected devices such as augmented and virtual reality (AR/VR) devices, smartphones, smart cameras, and drones, as well as medical and consumer electronics. The processor handles a variety of neural networks, such as convolutional (CNNs) and recurrent (RNNs), for image enhancement, classification, object detection, speech recognition, and natural language understanding. It also minimizes system memory bandwidth through various compression technologies.

The company announced, "Every new smartphone experience begins with more hardware performance and features to enable developers to unleash further software innovation." The company further added, "For developers, the CPU is more critical than ever as it not only handles general-compute tasks, as well as much of the device's ML, compute which must scale beyond today's limits. The same holds true for more immersive untethered AR/VR applications, and HD gaming on the go." To know more about this news, check out the Arm community's post.


Amazon to roll out automated machines for boxing up orders: Thousands of workers’ job at stake

Amrata Joshi
14 May 2019
3 min read
Amazon has recently made tremendous strides in bringing automation to its warehouses, and it now seems to be taking automation and AI to another level by introducing technology that replaces manual work. Last year, Amazon started adding technology to a handful of warehouses to scan goods coming down a conveyor belt. Amazon is now set to roll out specially made automated machines capable of boxing up orders, taking over a manual job currently held by thousands of workers, Reuters reports.

The company has considered installing two machines at each of more than a dozen warehouses, removing at least 24 job roles at each one; these facilities usually employ more than 2,000 people each. If implemented, the automation would lead to more than 1,300 job cuts across 55 U.S. fulfillment centers for standard-sized inventory. The company expects to recover the cost of the machines in two years, at around $1 million per machine plus operational expenses. The plans have not been finalized yet, because vetting the technology may take more time. In a statement to Reuters, an Amazon spokesperson said, "We are piloting this new technology with the goal of increasing safety, speeding up delivery times and adding efficiency across our network. We expect the efficiency savings will be re-invested in new services for customers, where new jobs will continue to be created."

Boxing multiple orders per minute over a 10-hour shift is a very difficult job. The new machines, known as CartonWrap and made by the Italian firm CMC Srl, pack boxes much faster than humans: they can manage 600 to 700 boxes per hour, four to five times the rate of a human packer. The company's employees might be trained to take up more technical roles. According to an Amazon spokesperson, the company is not just aiming to speed up the process but to improve efficiency and savings: "It's truly about efficiency and savings."

But Amazon's hiring deals with governments tell a different story. The company announced 1,500 jobs in Alabama last year, and the state promised Amazon $48.7 million in return over 10 years. Amazon is not alone in this league of automation: Walmart plans to deploy thousands of robots for lower-level jobs in 348 of its US stores, bringing in autonomous floor cleaners, shelf scanners, conveyor belts, and "pickup towers". At the pace companies like Amazon and Walmart are implementing technology in the retail space, advanced tech-enabled warehouses are near, but they will come at the cost of existing workers' jobs.
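As a back-of-envelope check (plain arithmetic on the Reuters figures quoted above, nothing more):

```python
# Back-of-envelope arithmetic on the Reuters figures quoted above.
machine_low, machine_high = 600, 700  # CartonWrap boxes per hour
speed_low, speed_high = 4, 5          # "four to five times the rate of a human"

# Implied human packing rate: machine rate divided by the speedup factor.
print(machine_low / speed_high, machine_high / speed_low)  # 120.0 175.0 boxes/hour

# Consistency check: "at least 24" roles removed at each of 55 fulfillment
# centers roughly matches the "more than 1,300" job cuts cited.
print(55 * 24)  # 1320
```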

Linux forms Urban Computing Foundation: Set of open source tools to build autonomous vehicles and smart infrastructure

Fatema Patrawala
09 May 2019
3 min read
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, on Tuesday announced the formation of the Urban Computing Foundation (UCF). UCF will accelerate open source software that improves mobility, safety, road infrastructure, traffic congestion, and energy consumption in connected cities. Its mission is to enable developers, data scientists, visualization specialists, and engineers to improve urban environments, human life quality, and city operation systems, and to build connected urban infrastructure. The founding members of UCF include Facebook, Google, IBM, UC San Diego, Interline Technologies, and Uber.

Jim Zemlin, executive director of the Linux Foundation, told VentureBeat that the Foundation will adopt an open governance model developed by the Technical Advisory Council (TAC), which will include technical and IP stakeholders in urban computing who will guide its work by reviewing and curating projects. The intent, Zemlin added, is to provide platforms to developers who seek to address traffic congestion, pollution, and other problems plaguing modern metros.

Here's the list of TAC members:
- Drew Dara-Abrams, principal, Interline Technologies
- Oliver Fink, director Here XYZ, Here Technologies
- Travis Gorkin, engineering manager of data visualization, Uber
- Shan He, project leader of Kepler.gl, Uber
- Randy Meech, CEO, StreetCred Labs
- Michal Migurski, engineering manager of spatial computing, Facebook
- Drishtie Patel, product manager of maps, Facebook
- Paolo Santi, senior researcher, MIT
- Max Sills, attorney, Google

On Tuesday, Facebook announced its participation as a founding member of the UCF.
https://twitter.com/fb_engineering/status/1125783991452180481
Facebook mentions in its post, "We are using our expertise — including a predictive model for mapping electrical grids, disaster maps, and more accurate population density maps — to improve access to this type of technology". Facebook further mentions that UCF will establish a neutral space for this critical work, which will include adapting geospatial and temporal machine learning techniques for urban environments and developing simulation methodologies for modeling and predicting citywide phenomena.

Uber also announced its participation, contributing Kepler.gl as the initiative's first official project. Kepler.gl is Uber's open source, no-code geospatial analysis tool for creating visualizations of large-scale location data. Released in 2018, it is currently used by Airbnb, Atkins Global, Cityswifter, Lime, Mapbox, Sidewalk Labs, and UBILabs, among others.

While all of this sets a path toward smarter cities, it also raises alarms about yet another avenue for violating privacy and mishandling user data, given the tech industry's history. Recently, Amnesty International in Canada said the Google Sidewalk Labs project in Toronto normalizes mass surveillance and is a direct threat to human rights. Questions have been raised about tech companies forming a foundation to address traffic congestion while not addressing privacy violations or online extremism.
https://twitter.com/shannoncoulter/status/1126199285530238976
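To give a sense of the contributed project, here is a minimal sketch of driving Kepler.gl from Python via its keplergl package; the trips.csv file and its column names are hypothetical stand-ins for your own location data:

```python
# Minimal sketch: rendering location data with the keplergl Python package.
# "trips.csv" and its latitude/longitude columns are hypothetical stand-ins.
import pandas as pd
from keplergl import KeplerGl

df = pd.read_csv("trips.csv")      # expects e.g. latitude/longitude columns

kepler_map = KeplerGl(height=500)  # create an empty map widget
kepler_map.add_data(data=df, name="trips")

# Export a self-contained interactive map; layers can also be configured
# interactively in a Jupyter notebook before exporting.
kepler_map.save_to_html(file_name="trips_map.html")
```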


Google to kill another product, the 'Works with Nest' API in the wake of bringing all smart home products under "Google Nest"

Bhagyashree R
09 May 2019
5 min read
Update: Included Google's recent plan of action after facing backlash from Nest users.

At this year's Google I/O developer conference, Google announced that it is bringing all Nest and Google Home products under one brand, "Google Nest". As part of this effort, Nest announced on Tuesday that it will discontinue the Works with Nest API by August 31, 2019, in favor of Works with Google Assistant. "We want to unify our efforts around third-party connected home devices under a single developer platform – a one-stop shop for both our developers and our customers to build a more helpful home. To accomplish this, we'll be winding down Works with Nest on August 31, 2019, and delivering a single unified experience through the Works with Google Assistant program," wrote Nest in a post.

With this change, Google aims to make the whole smart home experience more secure and unified for users. Over the next few months, users with Nest accounts will need to migrate to Google Accounts, which will serve as a single front-end for using products across Nest and Google. Along with providing a unified experience, Google also promises to be transparent about the data it collects, as laid out in an extensive document published on Tuesday. The document, titled "Google Nest commitment to privacy in the home", describes how its connected smart home devices work and lays out Google's approach to managing user data.

Though Google is promising improved security and privacy, the change will also break some existing third-party integrations. One of them is IFTTT (If This, Then That), a software platform for writing "applets" that allow devices from different manufacturers to talk to each other. IFTTT can be used for things like automatically adjusting the thermostat when the user approaches their house based on their phone's location, turning Philips Hue smart lights on when a Nest Cam security camera detects motion, and more. Developers who work with the Works with Nest API are advised to visit the Actions on Google Smart Home developer site to learn how to integrate smart home devices or services with the Google Assistant.

What do Nest users think about this decision?

Though Google is known for its search engine and other online services, it is also known for abandoning and killing its products in a trice. The decision to phase out Works with Nest has left many users who have bought Nest products infuriated.
https://twitter.com/IFTTT/status/1125930219305615360
"The big problem here is that there are a lot of people that have spent a lot of money on buying quality hardware that isn't just for leisure, it's for protection. I'll cite my 4 Nest Protects and an outdoor camera as an example. If somehow they get "sunsetted" due to some Google whim, fad or Because They Can, then I'm going to be pretty p*ssed, to say the least. Based on past experience I don't trust Google to act in the users' interest," said one Hacker News user. Others think the change could be for the better, but that Google's timeline is stringent. A Hacker News user commented on a discussion triggered by this news, "Reading thru it, it is not as brutal as it sounds, more than they merged it into the Google Assistant API, removing direct access permission to the NEST device (remember microphone-gate with NEST) and consolidating those permissions into Assistant. Whilst they are killing it off, they have a transition. However, as far as timelines go - August 2019 kill off date for the NEST API is brutal and not exactly the grace period users of connected devices/software will appreciate or in many cases with tech designed for non-technical people - know nothing until suddenly in August find what was working yesterday is now not working."

Google's reaction to the feedback from Nest users

In response to the backlash, Google published a blog post last week sharing its plan of action. According to this plan, users' existing devices and integrations will continue to work with their Nest accounts; however, they will not have access to any new features that will be available through a Google account. Google further clarified that it will stop taking new Works with Nest connection requests from August 31, 2019. "Once your WWN functionality is available on the WWGA platform you can migrate with minimal disruption from a Nest Account to a Google Account," the blog post reads.

Though Google shared its plans regarding third-party integrations, it was vague about the timelines: "One of the most popular WWN features is to automatically trigger routines based on Home/Away status. Later this year, we'll bring that same functionality to the Google Assistant and provide more device options for you to choose from. For example, you'll be able to have your smart light bulbs automatically turn off when you leave your home." Google also shared that it has teamed up with Amazon and other partners to bring custom integrations to Google Nest. Read the official announcement on Nest's website.
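For readers unfamiliar with the applet model described above, the pattern boils down to trigger-action rules. The following is a toy Python sketch of that idea only; the device names and objects are hypothetical, and this is not the Works with Nest or IFTTT API:

```python
# Toy sketch of the IFTTT-style trigger-action ("if this, then that") pattern.
# Everything here is hypothetical; it is not the Works with Nest or IFTTT API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Applet:
    trigger: Callable[[dict], bool]  # "if this": predicate over an event
    action: Callable[[], None]       # "then that": side effect to run

@dataclass
class Hub:
    applets: list = field(default_factory=list)

    def publish(self, event: dict):
        # Fan each device event out to every applet whose trigger matches.
        for applet in self.applets:
            if applet.trigger(event):
                applet.action()

hub = Hub()
# e.g. "turn the smart lights on when the camera detects motion"
hub.applets.append(Applet(
    trigger=lambda e: e.get("device") == "nest_cam" and e.get("motion"),
    action=lambda: print("hue lights: on"),
))
hub.publish({"device": "nest_cam", "motion": True})  # -> hue lights: on
```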


Introducing Open Eye MSA Consortium by industry leaders for targeting high-speed optical connectivity applications

Amrata Joshi
09 May 2019
3 min read
Yesterday, the Open Eye Consortium announced the establishment of its Multi-Source Agreement (MSA) to standardize advanced specifications for optical modules: lower latency, more efficient, lower cost modules targeting 50Gbps, 100Gbps, 200Gbps, and up to 400Gbps datacenter interconnects over single-mode and multimode fiber. The formation of the Open Eye MSA was initiated by MACOM and Semtech Corporation, with 19 current members across the Promoter and Contributing membership classes. The initial specification release is planned for Fall 2019, followed by product availability later in the year.

The Open Eye MSA aims to accelerate the adoption of PAM-4 optical interconnects scaling to 50Gbps, 100Gbps, 200Gbps, and 400Gbps by expanding on existing standards. This will enable optical module implementations that are less complex, lower cost, and lower power, with optimized clock and data recovery (CDR). The Open Eye MSA is investing in the development of an industry-standard optical interconnect that would bring interoperability among a broad group of industry-leading technology providers, including providers of lasers, electronics, and optical components, and the consortium's approach enables users to scale to next-generation baud rates.

Dale Murray, Principal Analyst at LightCounting, said, "LightCounting forecasts that sales of next-generation Ethernet products will exceed $500 million in 2020. However, this is only possible if suppliers can meet customer requirements for cost and power consumption. The new Open Eye MSA addresses both of these critical requirements. Having low latency is an extra bonus that HPC and AI applications will benefit from."

The initial Open Eye MSA specification will focus on 53Gbps-per-lane PAM-4 solutions for 50G SFP, 100G DSFP, 100G SFP-DD, 200G QSFP, and 400G QSFP-DD and OSFP single-mode modules. Subsequent specifications will target multimode and 100Gbps-per-lane applications.

David (Chan Chih) Chen, AVP, Strategic Marketing for Transceiver, AOI, said, "Through its participation in the Open Eye MSA, AOI is leveraging our laser and optical module technology to deliver benefits of low cost, high-speed connectivity to next-generation data centers." Jeffery Maki, Distinguished Engineer II, Juniper Networks, said, "As a leader in switching, routing and optical interconnects, Juniper Networks has a unique perspective into the technology and market dynamics affecting enterprise, cloud and service provider data centers, and the Open Eye MSA provides a forum to apply our insight and expertise on the pathway to 200G and faster connectivity speeds." To know more about this news, check out the Open Eye MSA's page.
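As background on the signaling named throughout (a generic illustration of PAM-4, not anything specific to the forthcoming Open Eye specification): PAM-4 encodes two bits per symbol on four amplitude levels, which is how a lane signaling at roughly 26.5 GBd carries 53 Gbps. A minimal Python sketch:

```python
# Minimal sketch of generic PAM-4 signaling (not Open Eye specific):
# each symbol carries 2 bits on one of 4 amplitude levels, so the symbol
# rate is half the bit rate -- a 53 Gbps lane runs at ~26.5 GBd.
GRAY_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}  # Gray-coded

def pam4_encode(bits):
    """Map a bit sequence (even length) to PAM-4 symbol levels."""
    assert len(bits) % 2 == 0
    return [GRAY_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([1, 0, 0, 0, 1, 1, 0, 1]))  # -> [3, -3, 1, -1]
```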
Understanding network port numbers, TCP, UDP, and ICMP on an operating system
The FTC issues orders to 7 broadband companies to analyze ISP privacy practices given they are also ad-support content platforms
Using statistical tools in Wireshark for packet analysis [Tutorial]

Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop

Sugandha Lahoti
08 May 2019
4 min read
At the ongoing Google I/O 2019, Google announced a major overhaul to its Flutter UI framework: Flutter has now expanded from mobile to multi-platform. The company released the first technical preview of Flutter for web, and the core framework for mobile devices was upgraded to Flutter 1.5. For desktop, Flutter remains an experimental project; it is not production-ready, but the team has published early instructions for developing apps that run on Mac, Windows, and Linux. An embedding API for Flutter is also available, allowing it to be used in scenarios such as home and automotive devices.

Google notes, “The core Flutter project has been making progress to enable desktop-class apps, with input paradigms such as keyboard and mouse, window resizing, and tooling for Chrome OS app development. The exploratory work that we did for embedding Flutter into desktop-class apps running on Windows, Mac and Linux has also graduated into the core Flutter engine.”

Flutter for Web

Flutter for web allows web-based applications to be built using the Flutter framework. Per Google, with Flutter for web you can create “highly interactive, graphically rich content,” though it plans to continue evolving this version with a “focus on performance and harmonizing the codebase.” It lets developers compile existing Flutter code written in Dart into a client experience that can be embedded in the browser and deployed to any web server. Google teamed up with the New York Times to build a small puzzle game called Kenken as an early example of what can be built using Flutter for web. The game uses the same code across Android, iOS, the web, and Chrome OS.

Flutter 1.5

Flutter 1.5 hosts a variety of new features, including updates to its iOS and Material widgets, engine support for new mobile device types, and support for Dart 2.3 with extensive UI-as-code functionality. It also has an in-app payment library that will make monetizing Flutter-based apps easier.

Google also showcased an ML Kit Custom Image Classifier, built using Flutter and Firebase, at Google I/O 2019. The kit offers an easy-to-use app-based workflow for creating custom image classification models: you can collect training data using the phone’s camera, invite others to contribute to your datasets, trigger model training, and use trained models, all from the same app.

Google has also released a comprehensive new training course for Flutter, built by The App Brewery. The new course is available at a time-limited discount, down from $199 to just $10.

Netizens had trouble making sense of Google’s move and were left wondering whether to invest in learning Dart or Kotlin. For reference, Flutter is built entirely in Dart, and Google made two major Kotlin announcements at this Google I/O: Android development will become increasingly Kotlin-first, and the first preview of Jetpack Compose, a new open-source UI toolkit for Kotlin developers, was unveiled.

A comment on Hacker News reads, “This is massively confusing. Do we invest in Kotlin ...or do we invest in Dart? Where will Android be in 2 years: Dart or Kotlin?” In response, another comment reads, “I don't think anyone has a definite answer, not even Google itself. Google placed several bets on different technologies and community will ultimately decide which of them is the winning one. Personally, I think native Android (Kotlin) and iOS (Swift) development is here to stay. I have tried many cross-platform frameworks and on any non-trivial mobile app, all of them cause more problem than they solve.” Another said, “If you want to do android development, Kotlin. If you want to do multi-platform development, flutter.” “Invest in Kotlin. Kotlin is useful for Android NOW. Whenever Dart starts becoming more mainstream, you'll know and have enough time to react to it”, was another user’s opinion. Read the entire conversation on Hacker News.

Google launches Flutter 1.2, its first feature update, at Mobile World Congress 2019
You can now permanently delete your location history and web and app activity data on Google
Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with a focus on AI and developer productivity

Apple convincingly lobbied against ‘right to repair’ bill in California citing consumer safety concern

Amrata Joshi
03 May 2019
3 min read
Apple is known for designing its products so that no one but Apple experts can easily repair them when issues arise. It now seems the company is trying hard to kill the ‘Right to Repair’ bill in California, which might work against Apple. The ‘Right to Repair’ bill, versions of which have been introduced in 18 states, is currently under discussion in California. Under this bill, consumers would get the right to fix or mod their devices without any effect on their warranty. The company has managed to lobby California lawmakers and push the bill back to 2020.

https://twitter.com/kaykayclapp/status/1123339532068253696

According to a recent report by Motherboard, an Apple representative and a lobbyist have been privately meeting with legislators in California to encourage them to back off the bill. The company is doing so by stoking fears of battery explosions among consumers who attempt to repair their own iPhones. The Apple representative argued that consumers might hurt themselves if they accidentally puncture the flammable lithium-ion batteries in their phones.

In a statement to The Verge, California Assemblymember Susan Talamantes Eggman, who first introduced the bill in March 2018 and again in March 2019, said, “While this was not an easy decision, it became clear that the bill would not have the support it needed today, and manufacturers had sown enough doubt with vague and unbacked claims of privacy and security concerns.”

Apple’s iPhone sales slowed down last quarter, so the company presumably hopes consumers will buy new handsets instead of getting their old ones repaired. Still, the suggestion that batteries might get punctured is likely to worry many and will surely fuel plenty of speculation.

Kyle Wiens, iFixit co-founder, laughs off the notion of an iPhone battery getting punctured during a repair. He admits the possibility, but according to him, it rarely happens. Wiens says, “Millions of people have done iPhone repairs using iFixit guides, and people overwhelmingly repair these phones successfully. The only people I’ve seen hurt themselves with an iPhone are those with a cracked screen, cutting their finger.” He further added, “Whether it uses gasoline or a lithium-ion battery, most every car has a flammable liquid inside. You can also get badly hurt if you’re changing a tire and your car rolls off the jack.”

That said, a recent example from David Pierce, WSJ tech reviewer, shows the risk is not purely hypothetical.

https://twitter.com/pierce/status/1113242195497091072

With so much talk around repairing and replacing, it’s difficult to predict whether the ‘Right to Repair’ bill will come into force for iPhones anytime soon. Only in 2020 will we get a clearer picture of the bill, and learn whether consumer safety is really at stake or whether the objections mainly serve the company’s interests.

Apple plans to make notarization a default requirement in all future macOS updates
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Apple officially cancels AirPower; says it couldn’t meet hardware’s ‘high standards’