
Tech News - IoT and Hardware

119 Articles
According to a report, Microsoft plans new 4K webcams that will bring facial recognition to all Windows 10 devices in 2019

Amrata Joshi
27 Dec 2018
3 min read
Microsoft plans to introduce two new webcams next year. One is designed to extend Windows Hello facial recognition to all Windows 10 PCs. The other will work with the Xbox One, bringing back the Kinect feature that let users automatically sign in by moving in front of the camera. Both webcams will work with multiple accounts and family members. Microsoft is also planning to launch its Surface Hub 2S in 2019, an interactive digital smart board for the modern workplace that features a USB-C port and upgradeable processor cartridges.

Until now, PC users have relied on alternatives from Creative, Logitech, and Razer to bring facial recognition to desktop PCs. The planned webcams are related to the USB-C webcams that will ship with the Surface Hub 2, due next year; the Surface Hub 2X is expected in 2020. In an interview with The Verge in October, Microsoft Surface chief Panos Panay suggested that Microsoft could release a USB-C webcam soon. “Look at the camera on Surface Hub 2, note it’s a USB-C-based camera, and the idea that we can bring a high fidelity camera to an experience, you can probably guess that’s going to happen,” hinted Panay. Such a camera could extend the experience beyond Microsoft’s own Surface devices.

The camera for Windows 10 will, for the first time, bring facial recognition to all Windows 10 PCs. Currently, Windows Hello facial recognition is restricted to built-in webcams like the ones on Microsoft's Surface devices. According to Windows watcher Paul Thurrott, Microsoft is making the new 4K cameras for Windows 10 PCs and its gaming console, the Xbox One. The webcam will return a Kinect-like feature to the Xbox One, allowing users to authenticate by putting their face in front of the camera. With the recent Windows 10 update, Microsoft enabled WebAuthn-based authentication, which helps users sign in to its sites such as Office 365 with Windows Hello and security keys.

The Windows Hello-compatible webcams and FIDO2, a passwordless sign-in standard with Windows Hello at its core, will be launched together next year. It will be interesting to see how the new year turns out for Microsoft and its users with these major releases.

Microsoft urgently releases Out-of-Band patch for an active Internet Explorer remote code execution zero-day vulnerability
NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release
Microsoft open sources Trill, a streaming engine that employs algorithms to process “a trillion events per day”
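Windows Hello's internals are not public, but camera-based sign-in of this kind generally reduces a face capture to a numeric embedding and compares it against a template stored at enrolment. A minimal, purely illustrative sketch (the embeddings, threshold, and function names are all invented for illustration, not Microsoft's pipeline):

```python
# Toy model of embedding-based face matching: sign-in is accepted only
# when a fresh capture's embedding is close enough to the enrolled one.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches(enrolled, candidate, threshold=0.95):
    """Accept the sign-in only if the embeddings are nearly parallel."""
    return cosine_similarity(enrolled, candidate) >= threshold

enrolled = [0.12, 0.98, 0.31]   # template stored at enrolment
same_user = [0.11, 0.97, 0.33]  # fresh capture, slight variation
stranger = [0.90, 0.10, 0.42]   # different face, different embedding

print(matches(enrolled, same_user), matches(enrolled, stranger))  # → True False
```

Real systems add liveness checks (Windows Hello uses infrared depth data precisely so a photo cannot stand in for a face), but the accept/reject decision reduces to a comparison like this one.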

IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others

Natasha Mathur
21 Dec 2018
5 min read
IEEE Computer Society (IEEE-CS) released its annual tech predictions earlier this week, unveiling the ten technology trends it considers most likely to reach wide adoption in 2019. "The Computer Society's predictions, based on an in-depth analysis by a team of leading technology experts, identify top technologies that have substantial potential to disrupt the market in the year 2019," mentions Hironori Kasahara, IEEE Computer Society President. Let’s have a look at the top ten technology trends predicted to reach wide adoption in 2019.

Top ten trends for 2019

Deep learning accelerators
According to the IEEE Computer Society, 2019 will see wide-scale adoption of companies designing their own deep learning accelerators such as GPUs, FPGAs, and TPUs, which can be used in data centers. The development of these accelerators will further allow machine learning to be used in different IoT devices and appliances.

Assisted transportation
Another trend predicted for 2019 is the adoption of assisted transportation, which is already paving the way for fully autonomous vehicles. Although fully autonomous vehicles have not entirely arrived, self-driving tech saw a booming year in 2018: AWS introduced DeepRacer, a self-driving race car; Tesla is building its own AI hardware for self-driving cars; Alphabet’s Waymo will be launching the world’s first commercial self-driving cars in the upcoming months; and so on. Beyond self-driving, assisted transportation also depends heavily on deep learning accelerators for video recognition.

The Internet of Bodies (IoB)
As per the IEEE Computer Society, consumers have become very comfortable with self-monitoring using external devices like fitness trackers and smart glasses. With digital pills now entering mainstream medicine, body-attached, implantable, and embedded IoB devices provide richer data that enables the development of unique applications. However, IEEE notes that this tech also brings concerns related to security, privacy, physical harm, and abuse.

Social credit algorithms
Facial recognition tech was in the spotlight in 2018. For instance, Microsoft President Brad Smith requested that governments regulate the evolution of facial recognition technology this month, Google patented a new facial recognition system that uses your social network to identify you, and so on. According to the IEEE, social credit algorithms will see a rise in adoption in 2019. These algorithms use facial recognition and other advanced biometrics to identify a person and retrieve data about them from digital platforms, which can then be used to approve or deny access to consumer products and services.

Advanced (smart) materials and devices
The IEEE Computer Society predicts that in 2019, advanced materials and devices for sensors, actuators, and wireless communications will see widespread adoption. These materials, which include tunable glass, smart paper, and ingestible transmitters, will lead to applications in healthcare, packaging, and other appliances. “These technologies will also advance pervasive, ubiquitous, and immersive computing, such as the recent announcement of a cellular phone with a foldable screen. The use of such technologies will have a large impact on the way we perceive IoT devices and will lead to new usage models”, mentions the IEEE Computer Society.

Active security protection
From data breaches (Facebook, Google, Quora, Cathay Pacific, etc.) to cyber attacks, 2018 saw many security-related incidents. 2019 will see a new generation of security mechanisms that take an active approach to fighting such incidents. These would involve hooks that can be activated when new types of attacks are exposed, and machine learning mechanisms that can help identify sophisticated attacks.

Virtual reality (VR) and augmented reality (AR)
Packt’s 2018 Skill Up report highlighted what game developers feel about the VR world: a whopping 86% of respondents replied with ‘Yes, VR is here to stay’. The IEEE Computer Society echoes that thought, believing that VR and AR technologies will see even greater wide-scale adoption and will prove very useful for education, engineering, and other fields in 2019. Now that advertisements for VR headsets appear during prime-time television programs, IEEE believes VR/AR will see wide-scale adoption in 2019.

Chatbots
2019 will also see an expansion in the development of chatbot applications. Chatbots are used quite frequently for basic customer service on social networking hubs, and as intelligent virtual assistants in operating systems. Chatbots will also find applications in interaction with cognitively impaired children for therapeutic support. “We have recently witnessed the use of chatbots as personal assistants capable of machine-to-machine communications as well. In fact, chatbots mimic humans so well that some countries are considering requiring chatbots to disclose that they are not human”, mentions IEEE.

Automated voice spam (robocall) prevention
IEEE predicts that automated voice spam prevention technology will see widespread adoption in 2019. It will be able to block spoofed caller IDs and, in turn, enable “questionable calls” in which a computer asks the caller questions to determine whether the caller is legitimate.

Technology for humanity (specifically machine learning)
IEEE predicts an increase in the adoption rate of tech for humanity. Advances in IoT and edge computing are the leading factors driving this adoption. Events such as fires and bridge collapses are further creating the urgency to adopt these monitoring technologies in forests and on smart roads.

"The technical community depends on the Computer Society as the source of technology IP, trends, and information. IEEE-CS predictions represent our commitment to keeping our community prepared for the technological landscape of the future,” says the IEEE Computer Society. For more information, check out the official IEEE Computer Society announcement.

Key trends in software development in 2019: cloud native and the shrinking stack
Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity
Quantum computing, edge analytics, and meta learning: key trends in data science and big data in 2019

Real-time motion planning for robots made faster and more efficient with the RapidPlan processor

Melisha Dsouza
19 Dec 2018
4 min read
Yesterday, Realtime Robotics announced in a guest post in IEEE Spectrum that they have developed a new processor called RapidPlan, which tackles the bottleneck in a robot’s motion planning. Motion planning determines how to move a robot, or an autonomous vehicle, from its current position to a desired goal configuration. Although the concept sounds simple, it is far from it: not only does the robot have to reach the goal state, it also has to avoid any obstacles along the way. According to one study, collision detection (determining which edges in the roadmap, i.e., motions, cannot be used because they would result in a collision) consumed 99 percent of a motion planner’s computing time.

Traditionally, motion planning has been implemented in software running on high-performance commodity hardware. The software implementation, however, introduces delays of multiple seconds, making it impossible to deploy robots in dynamic environments or environments with humans. Such robots can only be used in controlled environments with just a few degrees of freedom. The post suggests that motion planning can be sped up with more hardware resources and software optimizations. However, even the vast computational resources of GPUs combined with sophisticated software consume a large amount of power and cannot compute more than a few plans per second, and changes in a robot’s task or scenario often require re-tuning the software.

How does RapidPlan work?
A robot moving from one configuration to another sweeps a volume in 3D space. Collision detection determines whether that swept volume collides with any obstacle (or with the robot itself). The surfaces of the swept volume and the obstacles are represented with meshes of polygons, and collision detection consists of computations to determine whether these polygons intersect. The challenge is that this is time-consuming: each test of whether two polygons intersect involves cross products, dot products, division, and other computations, and there can be millions of polygon intersection tests to perform.

RapidPlan overcomes this bottleneck and achieves general-purpose, real-time motion planning with sub-millisecond motion plans. The processor converts the computational geometry task into a much faster lookup task. At design time, for a large number of motions between configurations, the processor precomputes data that records which parts of 3D space those motions collide with. This precomputation, which is offline and based on simulating the motions to determine their swept volumes, is loaded onto the processor to be accessed at runtime. At runtime, the processor receives sensory input that describes which parts of the robot’s environment are occupied by obstacles, and uses its precomputed data to eliminate the motions that would collide with them.

Realtime Robotics' RapidPlan processor was developed as part of a research project at Duke University, where researchers found a way to speed up motion planning by three orders of magnitude using one-twentieth the power. Their processor checks for all potential collisions across the robot’s entire range of motion with unprecedented efficiency. RapidPlan is retargetable, updatable on the fly, and has capacity for tens of millions of edges. Inheriting many of the design principles of the original Duke processor, it has a reconfigurable and more scalable hardware design for computing whether a roadmap edge’s motion collides with an obstacle. It has capacity for extremely large roadmaps and can partition that capacity into several smaller roadmaps in order to switch between them at runtime with negligible delay. Additional roadmaps can also be transferred from off-processor memory on the fly, allowing the user, for example, to keep different roadmaps that correspond to different states of the end effector or to different task types.

Robots with fast reaction times can operate safely in an environment with humans, and a robot that can plan quickly can be deployed in relatively unstructured factories and adjust to imprecise object locations and orientations. Industries like logistics, manufacturing, health care, agriculture, domestic assistants, and autonomous vehicles can benefit from this processor. You can head over to IEEE Spectrum for more insights on this news.

MIPS open sourced under ‘MIPS Open Program’, makes the semiconductor space and SoC, ones to watch for in 2019
Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
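The precompute-then-lookup idea described above can be sketched in a few lines. This is a toy model under stated assumptions (a 2D voxel grid, a dict-of-sets roadmap, and invented function names), not Realtime Robotics' implementation:

```python
# Lookup-based collision checking: each roadmap edge's swept volume is
# precomputed as a set of voxel indices; at runtime, edges whose volumes
# intersect occupied voxels are discarded without any geometry math.

def precompute_swept_voxels(edges):
    """Offline step: map each edge to the voxels its motion sweeps through."""
    return {edge: frozenset(voxels) for edge, voxels in edges.items()}

def plan_usable_edges(swept, occupied):
    """Runtime step: keep only edges whose swept volume avoids obstacles."""
    return [edge for edge, voxels in swept.items() if voxels.isdisjoint(occupied)]

# Three edges of a tiny roadmap, with the 2D voxels each motion sweeps.
edges = {
    "A->B": {(0, 0), (0, 1)},
    "B->C": {(1, 1), (2, 1)},
    "A->C": {(0, 0), (1, 0), (2, 0)},
}
swept = precompute_swept_voxels(edges)

occupied = {(2, 1)}            # sensors report an obstacle in this voxel
usable = plan_usable_edges(swept, occupied)
print(sorted(usable))          # → ['A->B', 'A->C']
```

The expensive polygon-intersection work happens once, offline; the runtime check is pure set membership, which is what makes sub-millisecond replanning plausible when the same idea is baked into hardware.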

AI chipmaking startup ‘Graphcore’ raises $200m from BMW, Microsoft, Bosch, Dell

Melisha Dsouza
18 Dec 2018
2 min read
Today, Graphcore, a UK-based chipmaking startup, raised $200m in a Series D funding round from investors including Microsoft and BMW, valuing the company at $1.7bn. This brings the total capital raised by Graphcore to date to more than $300m. The round was led by U.K. venture capital firm Atomico and Sofina, with participation from some of the biggest names in the AI and machine learning industry, including Merian Global Investors, BMW i Ventures, Microsoft, Amadeus Capital Partners, Robert Bosch Venture Capital, and Dell Technologies Capital, among many others. The company intends to use the funds to execute on its product roadmap, accelerate scaling, and expand its global presence.

Graphcore, which designs chips purpose-built for artificial intelligence, is attempting to create a new class of chips better able to deal with the huge amounts of data needed for AI computing. The company is ramping up production to meet customer demand for its Intelligence Processing Unit (IPU) PCIe processor cards, the first to be designed specifically for machine intelligence training and inference. Nigel Toon, CEO and co-founder of Graphcore, said that Graphcore’s processing units can be used for both the training and deployment of machine learning systems, and that they are “much more efficient”. Tobias Jahn, principal at BMW i Ventures, stated that Graphcore’s technology "is well-suited for a wide variety of applications from intelligent voice assistants to self-driving vehicles.”

Last year the company raised $50 million from investors including Demis Hassabis, co-founder of DeepMind; Zoubin Ghahramani of Cambridge University, chief scientist at Uber; Pieter Abbeel from UC Berkeley; and Greg Brockman, Scott Gray, and Ilya Sutskever from OpenAI. Head over to Graphcore’s official blog for more insights on this news.

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
NVIDIA makes its new “brain for autonomous AI machines”, Jetson AGX Xavier Module, available for purchase
NVIDIA demos a style-based generative adversarial network that can generate extremely realistic images; has ML community enthralled

Intel unveils the first 3D Logic Chip packaging technology, ‘Foveros’, powering its new 10nm chips, ‘Sunny Cove’

Savia Lobo
13 Dec 2018
3 min read
Yesterday, chip manufacturing giant Intel unveiled Foveros, its new 3D packaging technology, which makes it possible to stack logic chips on top of one another. Intel says the first products to use Foveros will arrive in the second half of next year. Talking about the stacking logic, Raja Koduri, Intel’s chief architect, said, “You can pack more transistors in a given space. And also you can pack different kinds of transistors; if you want to put a 5G radio right on top of a CPU, solving the stacking problem would be great, because you have all of your functionality but also a small form factor.”

With the Foveros technology, Intel will allow for smaller "chiplets": fast logic chips sitting atop a base die that handles I/O and power delivery. The project will also help Intel overcome one of its biggest challenges, i.e., building full chips at the 10nm scale. The first Foveros-backed product will be a 10-nanometer compute element on a base die, typically used in low-power devices.

Source: Intel

Sunny Cove: Intel’s codename for the new 10nm chips
Sunny Cove will be at the heart of Intel’s next-generation Core and Xeon processors, which will be available in the latter half of next year. According to Intel, Sunny Cove will provide improved latency and allow more operations to be executed in parallel (thus acting more like a GPU). On the graphics front, Intel has also got new Gen11 integrated graphics “designed to break the 1 TFLOPS barrier,” which will be part of these Sunny Cove chips. Intel also promises improved speeds in AI-related tasks, cryptography, and machine learning, among other new features in the CPUs.

According to a detailed report by Ars Technica, “Sunny Cove makes the first major change to x64 virtual memory support since AMD introduced its x86-64 64-bit extension to x86 in 2003. Bits 0 through 47 are used, with the top 16 bits, 48 through 63, all copies of bit 47. This limits virtual address space to 256TB. These systems can also support a maximum of 256TB of physical memory.”

Starting from the second half of next year, everything from mobile devices to data centers may feature Foveros processors over time. “The company wouldn't say where, exactly, the first Foveros-equipped chip will end up, but it sounds like it'll be ideal for incredibly thin and light machines”, Engadget reports. To know more about this news in detail, visit the Intel Newsroom.

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
Apple T2 security chip has Touch ID, Security Enclave, hardware to prevent microphone eavesdropping, amongst many other features!
How the Titan M chip will improve Android security
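The 48-bit "canonical address" rule quoted from Ars Technica can be checked with a few lines of arithmetic (the helper name is ours, for illustration):

```python
# Worked example of the virtual memory rule described above: with 48-bit
# virtual addresses, bits 48-63 of a 64-bit pointer must all be copies of
# bit 47 ("canonical" addresses), and the usable space is 2**48 bytes.

def is_canonical(addr, va_bits=48):
    """True if bits va_bits..63 are all copies of bit va_bits-1."""
    top = addr >> (va_bits - 1)                  # bit 47 and everything above
    return top == 0 or top == (1 << (64 - va_bits + 1)) - 1

assert is_canonical(0x0000_7FFF_FFFF_FFFF)       # highest low-half address
assert is_canonical(0xFFFF_8000_0000_0000)       # lowest high-half address
assert not is_canonical(0x0000_8000_0000_0000)   # falls in the non-canonical gap

print(2**48 // 2**40, "TB")                      # → 256 TB of virtual address space
```

Sunny Cove's later 5-level paging extends `va_bits` to 57, which is exactly why this change is described as the first major revision to x64 virtual memory since 2003.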

LibrePCB 0.1.0 released with major changes in library editor and file format

Amrata Joshi
03 Dec 2018
2 min read
Last week, the team at LibrePCB released LibrePCB 0.1.0, a free EDA (Electronic Design Automation) package for developing printed circuit boards. Just three weeks ago, LibrePCB 0.1.0 RC2 was released with major changes to the library manager, control panel, library editor, schematic editor, and more. Key features of LibrePCB include cross-platform support (Unix/Linux, Mac OS X, Windows), an all-in-one design (project management plus library/schematic/board editors), and an intuitive, modern, and easy-to-use graphical user interface. It also features powerful library design tools and human-readable file formats.

What’s new in LibrePCB 0.1.0?

Library editor
The library editor now saves the library URL, and saving of the schematic-only component property has been improved.

File format stability
Since this release of LibrePCB is a stable one, the file format is now stable: projects created with this version will remain loadable in future LibrePCB releases.

Users are comparing LibrePCB 0.1.0 with KiCad, a free open source EDA suite for OSX, Linux, and Windows, and asking which one is better. Many users think LibrePCB 0.1.0 comes out ahead because its part libraries are well managed, whereas KiCad doesn’t have a coherent workflow for managing part libraries: it is difficult in KiCad to keep the parts of a component, like a schematic symbol, its footprint, and its 3D model, together. Read more about this news, in detail, on the LibrePCB blog.

A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly
How to secure your Raspberry Pi board [Tutorial]
Nvidia unveils a new Turing architecture: “The world’s first ray tracing GPU”
Amazon FreeRTOS adds a new ‘Bluetooth low energy support’ feature

Natasha Mathur
27 Nov 2018
2 min read
The Amazon team announced a newly added Bluetooth Low Energy (BLE) support feature for Amazon FreeRTOS. Amazon FreeRTOS is an open source, free to download and use IoT operating system for microcontrollers that makes it easy for you to program, deploy, secure, connect, and manage small, low-powered devices. It extends the FreeRTOS kernel (a popular open source operating system for microcontrollers) with software libraries that make it easy to connect your small, low-power devices to AWS cloud services, or to more powerful devices that run AWS IoT Greengrass, software that helps extend cloud capabilities to local devices.

With the help of Amazon FreeRTOS, you can collect data from these devices for IoT applications. Earlier, it was only possible to connect devices to a local network using common connection options such as Wi-Fi and Ethernet. Now, with the addition of the new BLE feature, you can securely connect Amazon FreeRTOS devices that use BLE to AWS IoT via Android and iOS devices. BLE support in Amazon FreeRTOS is currently available in beta.

Amazon FreeRTOS is widely used in industrial applications, B2B solutions, and consumer products companies such as appliance, wearable technology, or smart lighting manufacturers. For more information, check out the official Amazon FreeRTOS update post.

FreeRTOS affected by 13 vulnerabilities in its TCP/IP stack
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018

AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases

Prasad Ramesh
27 Nov 2018
4 min read
At the AWS re:Invent 2018 event yesterday, Amazon announced a variety of IoT-related AWS releases.

Three new AWS IoT Service Delivery Designations
The AWS Service Delivery Program helps customers find and select top APN Partners who have a track record of delivering specific AWS services. APN partners undergo a technical validation of their service delivery expertise in order to earn an AWS Service Delivery designation. Three new AWS IoT Service Delivery Designations have now been added: AWS IoT Core, AWS IoT Greengrass, and AWS IoT Analytics.

AWS IoT Things Graph
AWS IoT Things Graph provides an easy way for developers to connect different devices and web services in order to build IoT applications. Devices and web services are represented as reusable components called models. These models hide the low-level details and expose the states, actions, and events of the underlying devices and services as APIs. A drag-and-drop interface is available to connect the models visually and define interactions between them, which can build multi-step automation applications. Once built, the application can be deployed to your AWS IoT Greengrass-enabled device with a few clicks. It can be used in areas such as home automation, industrial automation, and energy management.

AWS IoT Greengrass has extended functionality
AWS IoT Greengrass brings abilities like local compute, messaging, data caching, sync, and ML inference to edge devices. New features that extend the capabilities of AWS IoT Greengrass can now be used:
  • Connectors to third-party applications and AWS services.
  • Hardware root of trust private key storage.
  • Isolation and permission configurations that increase the AWS IoT Greengrass Core configuration options.
The connectors allow you to easily build complex workflows on AWS IoT Greengrass even if you have no understanding of device protocols, managing credentials, or interacting with external APIs; connections can be made without writing code. Security is increased by hardware root of trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds a hardware root of trust level of security to existing AWS IoT Greengrass security features, which include X.509 certificates, enabling mutual TLS authentication and encryption of data whether in transit or at rest. The hardware secure element can also be used to protect secrets deployed to the AWS IoT Greengrass device. New configuration options allow deploying AWS IoT Greengrass into another container environment and directly accessing low-power devices like Bluetooth Low Energy (BLE) devices.

AWS IoT SiteWise, available in limited preview
AWS IoT SiteWise is a new service that simplifies collecting and organizing data from industrial equipment at scale. With this service, you can easily monitor equipment across industrial facilities to identify waste, production inefficiencies, and defects in products. With IoT SiteWise, industrial data is stored securely and is available and searchable in the cloud. IoT SiteWise integrates with industrial equipment via a gateway, which securely connects to on-premises data servers to collect data and send it to the AWS Cloud. AWS IoT SiteWise can be used in areas such as manufacturing, food and beverage, energy, and utilities.

AWS IoT Events, available in preview
AWS IoT Events is a new IoT service that makes it easy to catch and respond to events from IoT sensors and applications. The service recognizes events across multiple sensors in order to identify operational issues like equipment slowdowns, and triggers alerts to notify support teams of an issue. It offers a managed complex event detection service on the AWS cloud, making it simple to detect events across thousands of IoT sensors measuring, for example, temperature and humidity. System-wide event detection and responding with appropriate actions is easy and cost-effective with AWS IoT Events. Potential areas of use include manufacturing, oil and gas, and commercial and consumer products.

Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Data science announcements at Amazon re:invent 2017
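The kind of detector model AWS IoT Events manages can be illustrated with a tiny state machine (this is our own sketch of the concept, not the AWS IoT Events API; the function name and threshold are invented): watch a stream of sensor readings and fire an alert on each rising edge past a threshold, rather than on every high sample.

```python
# Toy complex-event detector: alert once when a sensor value first crosses
# the threshold upward (e.g. an overheating motor), not on every reading.

def detect_events(readings, threshold=80.0):
    """Return (index, value) pairs where readings cross the threshold upward."""
    alerts, was_high = [], False
    for i, value in enumerate(readings):
        is_high = value >= threshold
        if is_high and not was_high:      # rising edge: new incident starts
            alerts.append((i, value))
        was_high = is_high
    return alerts

temps = [72.0, 75.5, 81.2, 83.0, 78.9, 85.1]
print(detect_events(temps))  # → [(2, 81.2), (5, 85.1)]
```

Keeping state between readings is what distinguishes event detection from simple thresholding: two incidents are reported here, not four high samples.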

Introducing Strato Pi: An industrial Raspberry Pi

Prasad Ramesh
26 Nov 2018
4 min read
Italian company sferlabs has designed Strato Pi, a Raspberry Pi based board intended for industrial applications, where a higher level of reliability is required.

Source: sferlabs website

Strato Pi features
The board is roughly the same size as a regular Raspberry Pi 2/3 and is engineered to work in industrial environments that demand more rugged devices.

A power supply that can handle harsh environments
The Strato Pi accepts a wide range of power supplies and can handle substantial amounts of ripple, noise, and voltage fluctuation. The power supply circuit is heavily protected and filtered with oversized electrolytic capacitors, diodes, inductors, and a high-efficiency voltage regulator. The power converter is based on PWM converter integrated circuits, which provide up to 95% power efficiency and up to 3A continuous current output. Over-current limiting, over-voltage protection, and thermal shutdown are also built in. The board is protected against reverse polarity with resettable fuses, and surge protection up to ±500V/2ohms 1.2/50μs ensures reliability even in harsh environments.

UPS to safeguard against power failure
In database and data collection applications, a sudden power interruption may cause data loss. To tackle this, Strato Pi has an integrated UPS that gives the software enough time to save data and shut down when there is a power failure. The battery power supply stage of the board supplies power to the Strato Pi circuits without any interruption, even when the main power supply fails. This stage also charges the battery via a high-efficiency step-up converter, generating the optimal charging voltage independent of the main power supply voltage.

Built-in real-time clock
The Strato Pi has a built-in, battery-backed real-time clock/calendar, directly connected to the Raspberry Pi via the I2C bus interface, so the correct time is available even without an internet connection. The real-time clock is based on the MCP79410 general purpose Microchip RTCC chip. A replaceable CR1025 battery acts as a backup power source when the main power is not available; with the board powered on, the battery can last over 10 years.

Serial port
Strato Pi's RS-232 and RS-485 serial port interface circuits are insulated from the main and battery power supply voltages, which avoids failures due to ground loops. A microcontroller running a proprietary algorithm automatically manages the data direction of the RS-485 line, taking the baud rate and the number of bits into account without any special configuration. Thus, the Raspberry Pi board can communicate through its TX/RX lines without any additional signal.

CAN bus
The Controller Area Network (CAN) bus is widely used and based on a multi-master architecture. The board implements an easy-to-use CAN bus controller, and its RS-485 and CAN bus ports can be used at the same time. CAN specification version 2.0B is supported, at speeds of up to 1 Mbps.

A hardware watchdog
A hardware watchdog is an electronic circuit that can automatically reset the processor if the software hangs. It is implemented with the help of the on-board microcontroller and is independent of the Raspberry Pi’s internal CPU watchdog circuit.

The base variant starts at roughly $88; there is also a mini variant and products like a prebuilt server. For more details on Strato Pi, visit the sferlabs website.

Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25
Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV
Intelligent mobile projects with TensorFlow: Build your first Reinforcement Learning model on Raspberry Pi [Tutorial]
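The watchdog pattern described above is easy to simulate. This sketch uses logical ticks instead of a real hardware timer, and all names are our own illustration rather than the Strato Pi microcontroller's actual interface: healthy software must "feed" the watchdog periodically, and a missed deadline triggers a reset.

```python
# Software model of a hardware watchdog: if feed() is not called within
# `timeout_ticks` ticks, the watchdog fires a reset (in real hardware,
# the microcontroller would pull the Raspberry Pi's reset line).

class Watchdog:
    def __init__(self, timeout_ticks):
        self.timeout = timeout_ticks
        self.since_feed = 0
        self.resets = 0

    def feed(self):
        """Called periodically by healthy software to prove it is alive."""
        self.since_feed = 0

    def tick(self):
        """Advance the watchdog timer by one tick; reset on expiry."""
        self.since_feed += 1
        if self.since_feed > self.timeout:
            self.resets += 1        # hardware would reset the CPU here
            self.since_feed = 0

wd = Watchdog(timeout_ticks=3)
for t in range(10):
    if t < 5:                       # software is healthy for 5 ticks...
        wd.feed()
    wd.tick()                       # ...then hangs and stops feeding
print(wd.resets)                    # → 1
```

Because the timer runs outside the supervised processor, a hang in the Raspberry Pi's own software cannot prevent the reset, which is the whole point of making the watchdog external.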
What if buildings of the future could compute? European researchers make a proposal.

Prasad Ramesh
23 Nov 2018
3 min read
European researchers have proposed an idea for buildings that could compute. In the paper On buildings that compute. A proposal, published this week, they propose integrating computation into various parts of a building, from cement and bricks to paint.

What is the idea about?

Smart homes today are made up of several individual smart appliances, which may work individually or be interconnected via a central hub. "What if intelligent matter of our surrounding could understand us humans?" The idea is that the walls of a building, in addition to supporting the roof, could have more functionality: sensing, calculating, communicating, and even producing power. Each brick or block could be thought of as a decentralized computing entity, and these blocks could contribute to a large-scale parallel computation. This would transform a smart building into an intelligent computing unit that people live in and interact with. Such smart buildings that compute, the researchers say, could potentially offer protection from crime, natural disasters, or structural damage within the building, or simply greet the residents.

When nanotechnology meets embedded computing

The proposal involves using nanotechnology to embed computation and sensing directly into the construction materials, including intelligent concrete blocks and stimuli-responsive smart paint. The photosensitive paint would sense the internal and external environment, while a nanomaterial-infused concrete composition would sense the building environment to implement large-scale parallel information processing, resulting in distributed decision making. The result is a building which can be seen as a huge parallel computer consisting of computing concrete blocks. The key ingredients in this idea are functional nanoparticles which are photo-, chemo- and electro-sensitive.
The electronic elements mixed into the concrete span a range of electrical properties. The concrete is used to make building blocks equipped with processors, which gather information from distributed sensory elements, make decisions, communicate, and enable advanced computing. Together, the blocks form a wall that acts as a huge parallel array processor; the researchers envision a single building, or a small colony of them, turning into a large-scale universal computing unit.

This is an interesting idea, bizarre even. But the practicality of it is blurry. Can its applications justify the cost involved in creating such a building? There is also a question of sustainability: how long will the building last before it has to be redeveloped? I for one think that redevelopment would almost certainly undo the computational aspect of it.

For more details, read the research paper.

Home Assistant: an open source Python home automation hub to rule all things smart
The iRobot Roomba i7+ is a cleaning robot that maps and stores your house and also empties the trash automatically
Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration
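The notion of bricks jointly performing a large-scale parallel computation can be illustrated with a toy one-dimensional cellular automaton, where each "brick" repeatedly updates its state from its immediate neighbours. This sketch is purely illustrative and not taken from the paper.

```python
def step(wall):
    """One synchronous update: each brick looks at its two neighbours and
    itself and adopts the majority state (edges wrap around)."""
    n = len(wall)
    return [
        1 if wall[(i - 1) % n] + wall[i] + wall[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

# A noisy 'sensed' pattern settles into stable agreement after a few steps,
# a minimal example of distributed decision making across the wall
wall = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0]
for _ in range(3):
    wall = step(wall)
print(wall)  # [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
```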
Apex.AI announced Apex.OS and Apex.Autonomy for building failure-free autonomous vehicles

Sugandha Lahoti
20 Nov 2018
2 min read
Last week, Alphabet’s Waymo announced that they will launch the world’s first commercial self-driving cars next month. Just two days after that, Apex.AI announced their autonomous mobility systems, soon after closing a $15.5MM Series A funding round led by Canaan with participation from Lightspeed.

Apex.AI has designed a modular software stack for building autonomous systems that integrates easily into existing systems as well as third-party software. An interesting claim they make about the system: “The software is not designed for peak performance — it’s designed to never fail. We’ve built redundancies into the system design to ensure that single failures don’t lead to system-wide failures.” Their two products are Apex.OS and Apex.Autonomy.

Apex.OS

Apex.OS is a meta-operating system, an automotive version of ROS (Robot Operating System). It allows software developers to write safe and secure applications based on ROS 2 APIs. Apex.OS is built with safety in mind: it is being certified according to the automotive functional safety standard ISO 26262 as a Safety Element out of Context (SEooC) up to ASIL D. It ensures system security through HSM support, process-level security, encryption, and authentication, and improves production code quality through the elimination of unsafe code constructs. It ships with support for automotive hardware, i.e. ECUs and automotive sensors. Moreover, it comes with complete documentation including examples, tutorials, and design articles, plus 24/7 customer support.

Apex.Autonomy

Apex.Autonomy provides developers with building blocks for autonomy, with well-defined interfaces for easy integration with any existing autonomy stack. It is written in C++, is easy to use, and can be run and tested on Linux, Linux RT, QNX, Windows, and OSX. It is designed with production and ISO 26262 certification in mind and is CPU-bound on x86_64 and amd64 architectures.
A variety of LiDAR sensors are already integrated and tested.

Read more about the products on the Apex.AI website.

Alphabet’s Waymo to launch the world’s first commercial self driving cars next month
Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race
Indeed lists top 10 skills to land a lucrative job, building autonomous vehicles
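The claim that single failures shouldn't cascade into system-wide failures is usually realized through redundancy, for example by voting over redundant sensor readings. Below is a minimal 2-of-3 majority-vote sketch; it is illustrative only, not Apex code.

```python
def vote(readings, tolerance):
    """2-of-3 majority vote over redundant sensor readings.

    Returns the average of the closest agreeing pair if at least two
    readings agree within the tolerance; raises on a double fault.
    """
    a, b, c = sorted(readings)
    if b - a <= tolerance:
        return (a + b) / 2
    if c - b <= tolerance:
        return (b + c) / 2
    raise RuntimeError("no two sensors agree: double fault")

# One sensor fails high; the system keeps a sane value instead of failing
print(vote([20.1, 19.9, 87.3], tolerance=0.5))  # 20.0
```

A single faulty reading is outvoted; only a double fault forces the system into an explicit error path instead of silently using bad data.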
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25

Prasad Ramesh
16 Nov 2018
2 min read
Yesterday, the Raspberry Pi Foundation launched the Raspberry Pi 3 Model A+, a smaller and cheaper version of the Raspberry Pi 3B+. In 2014, the first-gen Raspberry Pi 1 Model B+ was followed by a lighter Model A+ with half the RAM and fewer ports, sized to fit the Hardware Attached on Top (HAT) form factor. Until now there were no such small form factor boards for the Raspberry Pi 2 and 3.

Size is cut down, but not (most of) the features

The Raspberry Pi 3 Model A+ retains most of the features and enhancements of the bigger board in this series: a 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU, 512MB LPDDR2 SDRAM, and dual-band 802.11ac wireless LAN and Bluetooth 4.2/BLE, along with improved USB mass-storage booting and improved thermal management. The entire Raspberry Pi 3 Model A+ board is an FCC-certified radio module, which will significantly reduce the cost of conformance testing for Raspberry Pi based products. What has shrunk is the price, now down to $25, and the board size, 65x56mm, the size of a HAT.

Source: Raspberry website

Raspberry Pi 3 Model A+ will likely be the last product for now

In March this year, the Foundation said that the 3+ platform is the final iteration of the “classic” Raspberry Pi boards. The next products will come out of necessity, not evolution, because an evolution would need new core silicon, on a new process node, with new memory technology. So this new board, the 3A+, is about closing things out; we won't see any more products in this line in the foreseeable future. It does answer one of their most frequent customer requests for ‘missing products’, and clears their pipeline to focus on building the next generation of Raspberry Pi boards.

For more details visit the Raspberry Pi website.
Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV
Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
Google’s Pixel camera app introduces Night Sight to help click clear pictures with HDR+

Amrata Joshi
15 Nov 2018
3 min read
Yesterday, Google's Pixel camera app gained a new feature, Night Sight, for taking sharp, clean photographs in very low light. It works on both the main and selfie cameras of all three generations of Pixel phones, and requires no tripod or flash.

How HDR+ helps Night Sight

Image source: Google AI Blog

The Night Sight feature builds on HDR+, which uses computational photography to produce clearer photographs. When the shutter button is pressed, HDR+ captures a rapid burst of pictures, then quickly combines them into one, improving results in both high-dynamic-range and low-light situations. It reduces the impact of read noise and shot noise, thereby improving SNR (Signal to Noise Ratio) in dim lighting. To keep photographs sharp even if the hand shakes or the subject moves, the Pixel camera app uses short exposures; pieces of frames that aren't well aligned get rejected, letting HDR+ produce sharp images even when there is motion. The app works well in both dim light and excessive light.

The default picture-taking mode on Pixel phones uses a zero-shutter-lag (ZSL) protocol, which limits exposure time. As soon as the camera app opens, it starts capturing image frames and stores them in a circular buffer, which constantly erases old frames to make room for new ones. When the shutter button is pressed, the camera sends the most recent 9 or 15 frames to the HDR+ or Super Res Zoom software; the image is captured at exactly the right moment, hence zero shutter lag. No matter how dim the scene is, HDR+ limits exposures to at most 66ms, allowing a display rate of at least 15 frames per second. For dimmer scenes where longer exposures are needed, Night Sight uses positive-shutter-lag (PSL) instead.
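The statistical core of the HDR+ merge is that averaging N frames reduces zero-mean noise by roughly a factor of √N. A simplified single-pixel sketch, with hand-chosen noise values (the real pipeline also aligns frames and rejects outliers):

```python
def merge_burst(frames):
    """Average a burst of frames pixel-by-pixel (alignment omitted)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# One pixel whose true value is 100, observed across a 15-frame burst
# with roughly zero-mean noise (values chosen by hand for the illustration)
noise = [9, -7, 4, -6, 2, -3, 8, -5, 1, -2, 6, -4, 3, -8, 3]
frames = [[100 + e] for e in noise]
merged = merge_burst(frames)[0]

print(abs(frames[0][0] - 100))        # single-frame error: 9
print(round(abs(merged - 100), 2))    # merged error: 0.07
```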
The app uses motion metering to measure recent scene motion and chooses an exposure time that minimizes blur.

How to use Night Sight?

The Night Sight feature can't operate in complete darkness; there should be at least some light, and it works better in uniform lighting than harsh lighting. Users can tap on various objects and then move the exposure slider to increase exposure. If it's very dark and the camera can't focus, tap on the edge of a light source or on a high-contrast edge, and keep very bright light sources out of the field of view to avoid lens flare artifacts.

The Night Sight feature has already created some buzz, but its major drawback is that it can't work in complete darkness. Also, since the learning-based white balancer is trained for the Pixel 3, it will be less accurate on older phones; the feature works better on newer phones than on older ones.

Read more about this news on the Google AI Blog.

The DEA and ICE reportedly plan to turn streetlights to covert surveillance cameras, says Quartz report
Facebook is at it again. This time with Candidate Info where politicians can pitch on camera
‘Peekaboo’ Zero-Day Vulnerability allows hackers to access CCTV cameras, says Tenable Research
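The motion-metering idea above can be sketched as choosing the longest exposure whose expected blur stays within a budget, capped by the mode's limit (66 ms for ZSL, longer for PSL). The function and numbers below are illustrative, not Google's actual algorithm:

```python
def choose_exposure_ms(motion_px_per_ms, blur_budget_px, max_exposure_ms):
    """Longest exposure whose expected motion blur stays within budget."""
    if motion_px_per_ms <= 0:          # static scene: use the mode's cap
        return max_exposure_ms
    return min(max_exposure_ms, blur_budget_px / motion_px_per_ms)

# ZSL caps exposure at 66 ms; PSL (Night Sight) allows much longer ones
print(choose_exposure_ms(0.5, 2.0, 66))     # 4.0   (fast motion: short exposure)
print(choose_exposure_ms(0.01, 2.0, 333))   # 200.0 (slow motion: long exposure)
```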
Helium proves to be less than an ‘ideal gas’ for iPhones and Apple watches

Prasad Ramesh
31 Oct 2018
3 min read
‘Hey, turn off the helium, it’s bad for my iPhone’ is not something you hear every day. In an unusual event at a medical facility this month, an MRI machine installation knocked out iPhones and Apple Watches: many iPhone users in the building found that their devices had stopped working. An EMP burst was initially suspected, but only the iPhone 6 and above and Apple Watch Series 0 and above were affected; the only iPhone 5 in the building and the Android phones remained functional. Luckily, no patients reported any issues.

The cause turned out to be the MEMS oscillator used in the newer, affected devices. These tiny components are used to measure time and work properly only in certain conditions, such as a vacuum or a specific gas surrounding the part. Helium, a sneaky one-atom gas, can get through the tiniest of crevices. While the MRI machine was being installed, its coolant, helium, leaked: approximately 120 liters over the span of 5 hours. Helium expands hundreds of times when it turns from liquid to gas, and with a boiling point of around −268 °C it did so at room temperature. A large part of a building could be flooded with the gas from 120 liters of liquid.

Apple does mention this in their official iPhone help guide: “Exposing iPhone to environments having high concentrations of industrial chemicals, including near evaporating liquified gasses such as helium, may damage or impair iPhone functionality.”

So what if your device is affected? Apple also mentions: “If your device has been affected and shows signs of not powering on, the device can typically be recovered. Leave the unit unconnected from a charging cable and let it air out for approximately one week. The helium must fully dissipate from the device, and the device battery should fully discharge in the process. After a week, plug your device directly into a power adapter and let it charge for up to one hour. 
Then the device can be turned on again.”

The original poster on Reddit, harritaco, even performed an experiment and posted it on YouTube. Not much happens in the 8-minute video, but he says that after repeating the experiment for 12 minutes the phone turned off. For more details and discussion, visit the Reddit thread.

A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
Apple launches iPad Pro, updates MacBook Air and Mac mini
Facebook and NYU are working together to make MRI scans 10x faster
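The scale of the leak can be sanity-checked with helium's liquid-to-gas expansion: a commonly cited figure is roughly 750:1 at room temperature (the exact ratio here is an assumption, not from the report):

```python
# Rough back-of-the-envelope: how much gas did 120 L of liquid helium become?
LIQUID_TO_GAS_RATIO = 750   # approximate expansion factor at room temperature

leaked_liquid_l = 120
gas_volume_m3 = leaked_liquid_l * LIQUID_TO_GAS_RATIO / 1000  # litres -> m^3

print(gas_volume_m3)  # 90.0 m^3 -- enough to flood a large part of a building
```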
Apple launches iPad Pro, updates MacBook Air and Mac mini

Prasad Ramesh
31 Oct 2018
3 min read
At an event in Brooklyn, New York yesterday, Apple unveiled the new iPad Pro and updated the MacBook Air and Mac mini.

iPad Pro

Following the trend, the new iPad Pro sports a larger screen-to-body ratio with minimal bezels. Powering it is an eight-core A12X Bionic chip, powerful enough for Photoshop CC, coming in 2019. There is a USB-C connector, Gigabit-class LTE, and up to 1TB of storage, with two variants offering 11-inch and 12.9-inch Liquid Retina displays.

Source: Apple

The display can refresh at up to 120Hz for smooth scrolling, but the headphone jack is removed. Battery life is stated at 10 hours. A dedicated Neural Engine supports tasks requiring machine learning, from photography to AR. Apple is calling it the ‘best device ever for AR’ thanks to its cameras, sensors, and improved four-speaker audio combined with the power of the A12X Bionic chip. There is also a second-generation Apple Pencil that attaches magnetically to the iPad Pro and charges at the same time, and a Smart Keyboard Folio made for versatility; the keyboard and Apple Pencil are sold separately.

MacBook Air

The new MacBook Air features a 13-inch Retina display, Touch ID, a newer i5 processor, and a more portable design than the previous MacBook. It is the cheapest MacBook to sport a Retina display, with a resolution of 2560×1600, and has a built-in 720p FaceTime camera. For better security, there is Touch ID, a fingerprint sensor built into the keyboard, and a T2 security chip.

Source: Apple

Each key of the keyboard is individually lit, and the touchpad area is larger. The new MacBook Air comes with an 8th-generation Intel Core i5 processor, Intel UHD Graphics, and faster 2133 MHz RAM up to 16GB; storage options go up to 1.5TB. There are only two Thunderbolt 3 USB-C ports and a 3.5mm headphone jack, no other connectors. Apple says the new MacBook Air is faster and provides a snappier experience. 
Mac mini

The Mac mini got a big performance boost: it is five times faster than the previous one. There are options for four- or six-core processors, with turbo boost up to 4.6GHz, and both versions come with Intel UHD Graphics 630. For memory, there is up to 64GB of 2666 MHz RAM.

Source: Apple

The new Mac mini also features a T2 security chip, so files stored on the SSD are automatically and fully encrypted. There are four Thunderbolt 3 ports, a 10-gigabit Ethernet port, an HDMI 2.0 port, two USB-A ports, and a 3.5mm audio jack; storage options go up to 2TB.

Apple says that both the MacBook Air and the Mac mini are made with 100% recycled aluminum, which reduces the carbon footprint of these devices by 50%. Visit the Apple website to see availability and pricing of the iPad Pro, MacBook Air, and Mac mini.

‘Think different’ makes Apple the world’s most valuable company, crossing $1 Trillion market cap
Apple releases iOS 12 beta 2 with screen time and battery usage updates among others
Apple and Amazon take punitive action against Bloomberg’s ‘misinformed’ hacking story