
Tech News - Data

1209 Articles

TimescaleDB goes distributed; implements ‘Chunking’ over ‘Sharding’ for scaling-out

Sugandha Lahoti
22 Aug 2019
5 min read
TimescaleDB announced yesterday that it is going distributed; this version is currently in private beta, with the public release slated for later this year. TimescaleDB is built on PostgreSQL, and one of PostgreSQL's long-standing problems is scaling out. To address this, TimescaleDB does not implement traditional sharding, instead using ‘chunking’.

What is TimescaleDB’s chunking?

In TimescaleDB, chunking is the mechanism that scales PostgreSQL for time-series workloads. Chunks are created by automatically partitioning data by multiple dimensions (one of which is time). In a blog post, TimescaleDB specifies, “this is done in a fine-grain way such that one dataset may be comprised of 1000s of chunks, even on a single node.”

Chunking offers a wider set of capabilities than sharding, which only offers the option to scale out. These include scaling up (on the same node) and scaling out (across multiple nodes), as well as elasticity, partitioning flexibility, data retention policies, data tiering, and data reordering. TimescaleDB automatically partitions a table across multiple chunks on the same instance, whether on the same or different disks. Its multi-dimensional chunking auto-creates chunks, keeps recent data chunks in memory, and provides time-oriented data lifecycle management (e.g., for data retention, reordering, or tiering policies). One issue, however, is managing the number of chunks (i.e., “sub-problems”). For this, TimescaleDB has come up with the hypertable abstraction to make partitioned tables easy to use and manage.

Hypertable abstraction makes chunking manageable

Hypertables handle large amounts of data by breaking it up into chunks, allowing operations to execute efficiently. When the number of chunks is large, these chunks can be distributed over several machines by using distributed hypertables. Distributed hypertables are similar to normal hypertables, but they add an additional layer of partitioning by distributing chunks across data nodes. They are designed for multi-dimensional chunking with a large number of chunks (from 100s to 10,000s), offering more flexibility in how chunks are distributed across a cluster. Users interact with a distributed hypertable much like a regular hypertable (which itself looks just like a regular Postgres table). Chunking does not put an additional burden on applications and developers, because they do not interact directly with chunks and thus do not need to be aware of the partition mapping themselves, unlike in some sharded systems. The system also does not expose different capabilities for chunks than for the entire hypertable.

TimescaleDB goes distributed

Distributed TimescaleDB is already available for testing in private beta for selected users and customers, and the initial licensed version is expected to be widely available later this year. This version will support features such as high write rates, query parallelism, predicate push-down for lower latency, elastically growing a cluster to scale storage and compute, and fault tolerance via physical replicas.

Developers were quite intrigued by the new chunking approach. A number of questions were asked on Hacker News and duly answered by TimescaleDB’s creators. One comment raised the hot partition problem: “The biggest limit is that their "chunking" of data by time-slices may lead directly to the hot partition problem -- in their case, a "hot chunk." Most time series is 'dull time' -- uninteresting time samples of normal stuff. Then, out of nowhere, some 'interesting' stuff happens. It'll all be in that one chunk, which will get hammered during reads.”

To this, Erik Nordström, Timescale engineer, replied, “TimescaleDB supports multi-dimensional partitioning, so a specific "hot" time interval is actually typically split across many chunks, and thus server instances. We are also working on native chunk replication, which allows serving copies of the same chunk out of different server instances. Apart from these things to mitigate the hot partition problem, it's usually a good thing to be able to serve the same data to many requests using a warm cache compared to having many random reads that thrashes the cache.”

Another question asked, “In this vision, would this cluster of servers be reserved exclusively for time series data or do you imagine it containing other ordinary tables as well?” To which Mike Freedman, CTO of Timescale, answered, “We commonly see hypertables (time-series tables) deployed alongside relational tables, often because there exists a relation between them: the relational metadata provides information about the user, sensor, server, security instrument that is referenced by id/name in the hypertable. So joins between these time-series and relational tables are often common, and together these serve the applications one often builds on top of your data. Now, TimescaleDB can be installed on a PG server that is also handling tables that have nothing to do with its workload, in which case one does get performance interference between the two workloads. We generally wouldn't recommend this for more production deployments, but the decision here is always a tradeoff between resource isolation and cost.”

Some thought sharding remains the better choice even if chunking improves performance.

https://twitter.com/methu/status/1164381453800525824

Read the official announcement for more information. You can also view the documentation.

TimescaleDB 1.0 officially released
Introducing TimescaleDB 1.0 RC, the first OS time-series database with full SQL support
Zabbix 4.2 release packed with modern monitoring system for data collection, processing and visualization
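To illustrate the hypertable workflow described above, here is a minimal sketch using TimescaleDB's SQL API through psycopg2. The connection string, table name, and columns are hypothetical; the distributed call is based on the beta described in the article, so its exact name and arguments may differ in released versions.

```python
# Minimal sketch of creating a (distributed) hypertable in TimescaleDB.
# The connection string and schema below are placeholders, not from the article.
import psycopg2

conn = psycopg2.connect("postgresql://user:password@localhost:5432/metrics")
cur = conn.cursor()

# A plain PostgreSQL table for time-series data.
cur.execute("""
    CREATE TABLE conditions (
        time        TIMESTAMPTZ NOT NULL,
        device_id   TEXT        NOT NULL,
        temperature DOUBLE PRECISION
    );
""")

# Convert it into a hypertable: TimescaleDB now auto-partitions rows into
# chunks by time (and optionally by a space dimension such as device_id).
cur.execute("SELECT create_hypertable('conditions', 'time');")

# On the distributed beta, the equivalent call spreads chunks across data
# nodes; shown commented out, for illustration only.
# cur.execute("SELECT create_distributed_hypertable('conditions', 'time', 'device_id');")

# Inserts and queries target the hypertable as if it were a normal table;
# chunk routing stays transparent to the application.
cur.execute("INSERT INTO conditions VALUES (now(), 'dev-1', 21.5);")
conn.commit()
```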


Cerebras Systems unveils Wafer Scale Engine, an AI chip with 1.2 trillion transistors that is 56 times larger than the largest Nvidia GPU

Savia Lobo
21 Aug 2019
5 min read
A California-based AI startup, Cerebras Systems, has unveiled the largest semiconductor chip ever built, named the ‘Wafer Scale Engine’ and designed to quickly train deep learning models.

The Cerebras Wafer Scale Engine (WSE) measures 46,225 square millimeters and contains more than 1.2 trillion transistors. It is “more than 56X larger than the largest graphics processing unit, containing 3,000X more on-chip memory and more than 10,000X the memory bandwidth,” the whitepaper reads. Most chips available today are actually collections of chips etched onto a 12-inch silicon wafer, processed in batches in a chip factory, and then cut apart. The WSE, however, is a single chip interconnected across one wafer. “The interconnections are designed to keep it all functioning at high speeds so the trillion transistors all work together as one,” VentureBeat reports.

Andrew Feldman, co-founder and CEO of Cerebras Systems, said, “Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state-of-the-art by solving decades-old technical challenges that limited chip size — such as cross-reticle connectivity, yield, power delivery, and packaging.” He further adds, “Every architectural decision was made to optimize performance for AI work. The result is that the Cerebras WSE delivers, depending on workload, hundreds or thousands of times the performance of existing solutions at a tiny fraction of the power draw and space.”

According to Wired, “Cerebras’ chip covers more than 56 times the area of Nvidia’s most powerful server GPU, claimed at launch in 2017 to be the most complex chip ever. Cerebras founder and CEO Andrew Feldman says the giant processor can do the work of a cluster of hundreds of GPUs, depending on the task at hand, while consuming much less energy and space.”

Source: Twitter

In the whitepaper, Feldman explains that for maximum performance, the entire model should fit in the fastest memory, which is the memory closest to the computation cores. This is not the case in CPUs, TPUs, and GPUs, where main memory is not integrated with compute. Instead, the vast majority of memory sits off-chip, far away on separate DRAM chips or on a stack of those chips in a high bandwidth memory (HBM) device. As a result, main memory is excruciatingly slow.

The rise of AI brought demand for much higher processing power, which in turn drove demand for GPUs. However, even if a machine is filled with dozens of Nvidia’s graphics chips, “it can take weeks to “train” a neural network, the process of tuning the code so that it finds a solution to a given problem,” according to Fortune. Linley Gwennap, a chip observer who publishes the distinguished chip newsletter Microprocessor Report, told Fortune that bundling together multiple GPUs in a computer starts to show diminishing returns once more than eight of the chips are combined.

Feldman adds, “The hard part is moving data.” While training a neural network, thousands of operations happen in parallel, and chips must constantly share data as they crunch those parallel operations. However, computers with multiple chips can hit performance issues while trying to pass data back and forth between the chips over the slower wires that link them on a circuit board. The solution, Feldman said, was to “take the biggest wafer you can find and cut the biggest chip out of it that you can.”

Per Fortune, “the chip won’t be sold on its own but will be packaged into a computer “appliance” that Cerebras has designed. One reason is the need for a complex system of water-cooling, a kind of irrigation network to counteract the extreme heat generated by a chip running at 15 kilowatts of power.” “The wafers were produced in partnership with Taiwan Semiconductor Manufacturing, the world’s largest chip manufacturer, but Cerebras has exclusive rights to the intellectual property that makes the process possible.”

J.K. Wang, TSMC’s senior vice president of operations, said, “We are very pleased with the result of our collaboration with Cerebras Systems in manufacturing the Cerebras Wafer Scale Engine, an industry milestone for wafer-scale development.” “TSMC’s manufacturing excellence and rigorous attention to quality enable us to meet the stringent defect density requirements to support the unprecedented die size of Cerebras’ innovative design.”

The whitepaper explains that the 400,000 cores on the Cerebras WSE are connected via a Swarm communication fabric in a 2D mesh with 100 petabits per second of bandwidth. Swarm provides a hardware routing engine to each of the compute cores and connects them with short wires optimized for latency and bandwidth. Feldman said that “a handful” of customers are trying the chip, including on drug design problems. He plans to sell complete servers built around the chip, rather than chips on their own, but declined to discuss price or availability.

Many find this announcement interesting given the number of transistors at work on the wafer engine. A few are skeptical whether the chip will live up to expectations. A user on Reddit commented, “I think this is fascinating. If things go well with node scaling and on-chip non-volatile memory, by mid 2030 we could be approaching human brain densities on a single ‘chip’ without even going 3D. It's incredible.” A user on Hacker News writes, “In their whitepaper, they claim "with all model parameters in on-chip memory, all of the time," yet that entire 15 kW monster has only 18 GB of memory. Given the memory vs compute numbers that you see in Nvidia cards, this seems strangely low.”

https://twitter.com/jwangARK/status/1163928272134168581
https://twitter.com/jwangARK/status/1163928655145426945

To know more about the Cerebras WSE chip in detail, read the complete whitepaper.

Why DeepMind AlphaGo Zero is a game changer for AI research
Speech2Face: A neural network that “imagines” faces from hearing voices. Is it too soon to worry about ethnic profiling?
Alibaba’s chipmaker launches open source RISC-V based ‘XuanTie 910 processor’ for 5G, AI, IoT and self-driving applications
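To put the Hacker News comment about 18 GB of on-chip memory in perspective, here is a rough back-of-the-envelope sketch, assuming parameter storage only (no gradients, optimizer state, or activations), which is an assumption on my part rather than a claim from the whitepaper:

```python
# Rough sketch: which model sizes fit in 18 GB of on-chip memory if only the
# parameters are stored (an assumption; real training needs far more state).
ON_CHIP_BYTES = 18 * 1024**3  # 18 GB, the figure cited in the Hacker News comment

for name, params in [("BERT-Large", 340e6), ("GPT-2", 1.5e9), ("GPT-2 8B", 8.3e9)]:
    fp32_gb = params * 4 / 1024**3   # 4 bytes per fp32 parameter
    fp16_gb = params * 2 / 1024**3   # 2 bytes per fp16 parameter
    fits = "fits" if fp16_gb * 1024**3 <= ON_CHIP_BYTES else "does not fit"
    print(f"{name:10s}: {fp32_gb:5.1f} GB fp32, {fp16_gb:5.1f} GB fp16 ({fits} at fp16)")
```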


After PostgreSQL, DigitalOcean now adds MySQL and Redis to its managed databases offering

Savia Lobo
20 Aug 2019
2 min read
Today, DigitalOcean, the cloud for developing modern apps, announced that it has introduced Managed Databases for MySQL and Redis, the popular open-source relational and in-memory databases, respectively. These offerings eliminate the complexity involved in managing, scaling and securing database infrastructure, and instead allow developers to focus on building apps.

DigitalOcean’s Managed Databases service was launched in February, with PostgreSQL as its first supported engine, and allows developers to create fully-managed database instances in the cloud. Managed Databases provides features such as worry-free setup and maintenance, free daily backups with point-in-time recovery, standby nodes with automated failovers, end-to-end security, and scalable performance. The new offerings build upon the existing support for PostgreSQL, providing worry-free maintenance for three of the most popular database engines.

DigitalOcean’s Senior Vice President of Product, Shiven Ramji, said, “With the additions of MySQL and Redis, DigitalOcean now supports three of the most requested database offerings, making it easier for developers to build and run applications, rather than spending time on complex management.” “The developer is not just the DNA of DigitalOcean, but the reason for much of the company’s success. We must continue to build on this success and support developers with the services they need most on their journey towards simple app development,” he further added.

DigitalOcean selected MySQL and Redis as the next offerings for its Managed Databases service due to overwhelming demand from its customer base and the developer community at large. The Managed Databases offerings for MySQL and Redis are available in the New York, Frankfurt and San Francisco data center regions, with support for additional regions being added over the next few weeks.

To know more about this news in detail, head over to DigitalOcean’s official website.

Digital Ocean announces ‘Managed Databases for PostgreSQL’
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
Limited Availability of DigitalOcean Kubernetes announced!


Cloudflare plans to go public; files S-1 with the SEC

Savia Lobo
19 Aug 2019
3 min read
Cloudflare announced its plans to go public and filed an S-1 with the SEC (Securities and Exchange Commission) last week. The move comes after Cloudflare received a wave of negative publicity over the use of its network by the 8chan online forum, which is known to have inspired the mass shootings in El Paso, Texas, and Christchurch, New Zealand. “We are aware of some potential customers that have indicated their decision to not subscribe to our products was impacted, at least in part, by the actions of certain of our paying and free customers,” the filing says.

After the El Paso mass shooting, Cloudflare initially defended hosting 8chan, calling it its ‘moral obligation’ to provide 8chan its services. However, after an intense public and media backlash, Cloudflare reversed its stance and announced that it would completely stop providing support for 8chan. To this, Jim Watkins, the owner of 8chan, said in a video statement, “It is clearly a political move to remove 8chan from CloudFlare; it has dispersed a peacefully assembled group of people.”

Cloudflare said it avoids cutting off websites for objectionable content because doing so can also “harm our brand and reputation”; however, it banned the neo-Nazi website Daily Stormer in 2017 after the website claimed that Cloudflare was protecting it and secretly agreed with the site's neo-Nazi articles. “We received significant adverse feedback for these decisions from those concerned about our ability to pass judgment on our customers and the users of our platform, or to censor them by limiting their access to our products, and we are aware of potential customers who decided not to subscribe to our products because of this,” says the filing.

Cloudflare also plans to list shares on the New York Stock Exchange under the ticker symbol "NET," the filing mentions. It has raised just over $400 million from investors including Franklin Templeton Investments, Fidelity Investments, Microsoft and Baidu, Forbes states. “Activities of our paying and free customers or the content of their websites or other Internet properties, as well as our response to those activities, could cause us to experience significant adverse political, business, and reputational consequences with customers, employees, suppliers, government entities, and others,” the company said in the filing.

According to Forbes, “The filing reveals that Prince owns 16.6% of the company, which (after factoring in a private company discount) is worth about $270 million based on the 2015 valuation. Zatlyn (co-founder) owns 5.6% of the company, worth about $90 million. Holloway (co-founder) owns a 3.2% stake. Cloudflare has not yet indicated the price range for selling its shares.”

Earlier this year, Fastly, another cloud provider, also went public. “After pricing its IPO at $16 per share, Fastly’s equity skated higher in early trading. Today Fastly is worth $23.19 per share, up about 45 percent,” Crunchbase reported in July.

To know more about this news in detail, head over to the S-1 filing.

Cloudflare RCA: Major outage was a lot more than “a regular expression went bad”
Cloudflare suffers 2nd major internet outage in a week. This time due to globally deploying a rogue regex rule
After refusing to sign the Christchurch Call to fight online extremism, Trump admin launches tool to defend “free speech” on social media platforms


Stripe's ‘Negative Emissions Commitment’ to pay for removal and sequestration of CO2 to mitigate global warming

Vincy Davis
16 Aug 2019
4 min read
Yesterday, Stripe, the online payments platform provider, announced a phenomenal initiative, the ‘Negative Emissions Commitment’. Under the commitment, Stripe will pay for the removal of carbon dioxide directly from the atmosphere and its sequestration in secure, long-term storage in order to mitigate or delay global warming.

https://twitter.com/patrickc/status/1162120064302059520

Carbon sequestration is the process of capturing atmospheric carbon dioxide, or other forms of carbon, and storing it for the long term in secure storage.

Besides Stripe, there are growing startups such as Carbon Engineering, Climeworks, and Global Thermostat actively working in this space. Stripe seeks to purchase negative carbon dioxide (CO2) emissions at any price per tonne of CO2 (tCO2). The official blog post adds, “And so we commit to spending at least twice as much on sequestration as we do on offsets, with a floor of at least $1M per year.”

This initiative comes after the IPCC, in its recent summary report, stated that scenarios in which the temperature increase stays below 2°C will have “substantial net negative emissions by 2100, on average around 2 gigatons of CO2 per year.”

Image Source: IPCC

Stripe plans to work with experts to select carbon capture solutions based on cost-effectiveness; these are expected to cost more than $100 per tCO2, compared to the $8 per tCO2 the company pays for offsets.

What are Stripe’s current efforts in the technology landscape?

There are three ongoing project areas that the software company expects to fund. First is land management, which aims to improve natural carbon sinks through forestation initiatives, soil management reform, and agricultural techniques. Scientists and entrepreneurs can try to increase the duration of CO2 storage by engineering plant roots so that more CO2 is stored for an extended period of time.

The second area is enhanced weathering, in which CO2 in a gas or liquid reacts with silicate minerals and rocks rich in calcium and magnesium to form carbonate minerals. The captured carbon is then sequestered in the mineral for centuries.

Next is direct-air capture: an industrial installation that uses energy to force air into contact with a CO2 sorbent. The CO2 is later separated from the sorbent and transported to long-term storage sites.

Stripe believes that humanity will need more such techniques in the coming decades in order to achieve the collective goal of net negative emissions. The company expects that if a scalable and verifiable negative emissions technology is made available for $100 per tonne of captured CO2 (tCO2), it could turn out to be a trillion-dollar industry by the end of the century. Such projects would not only help achieve negative emissions but would also help counter anthropogenic climate change. Stripe has also announced that it is open to funding such projects in the coming decade.

People all over the world are admiring Stripe’s commitment to carbon removal.

https://twitter.com/onwards2020/status/1162141027139932160
https://twitter.com/noeltoolan/status/1162160662086242307
https://twitter.com/RayWalshe/status/1162122719678390274

Many are also expecting other companies to follow Stripe in this initiative.

https://twitter.com/Ruh_abhi/status/1162181740158279685
https://twitter.com/hukl/status/1162285694385041408

For more details about Stripe’s Negative Emissions Commitment, head over to Stripe’s official blog.

Stripe’s API degradation RCA found unforeseen interaction of database bugs and a config change led to cascading failure across critical services
Stripe’s API suffered two consecutive outages yesterday causing elevated error rates and response times
Stripe updates its product stack to prepare European businesses for SCA-compliance
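A quick back-of-the-envelope sketch, using only the prices and the $1M annual floor quoted above, shows how far the committed budget goes at each price point:

```python
# Sketch: tonnes of CO2 purchasable per year under Stripe's stated commitment,
# using only figures quoted in the article above.
annual_floor_usd = 1_000_000      # at least $1M per year on sequestration
offset_price = 8                  # $/tCO2 Stripe pays for conventional offsets
capture_price = 100               # $/tCO2 expected floor for carbon capture

print(f"At offset prices:  {annual_floor_usd / offset_price:>10,.0f} tCO2/year")
print(f"At capture prices: {annual_floor_usd / capture_price:>10,.0f} tCO2/year")
# 125,000 tCO2/year at $8 vs 10,000 tCO2/year at $100 -- illustrating why
# sequestration dollars buy far fewer tonnes than offsets today.
```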


NVIDIA’s latest breakthroughs in conversational AI: Trains BERT in under an hour, launches Project Megatron to train transformer based models at scale

Bhagyashree R
14 Aug 2019
4 min read
Researchers have been constantly putting effort into improving conversational AI to make it better understand human languages and their nuances. One such advancement in the conversational AI field is the introduction of Transformer-based models such as OpenAI’s GPT-2 and Google’s BERT. In a quest to make the training and deployment of these vastly large language models efficient, NVIDIA researchers recently conducted a study, the details of which they shared yesterday.

https://twitter.com/ctnzr/status/1161277599793860618

NVIDIA’s Tensor Core GPUs took less than an hour to train the BERT model

BERT, short for Bidirectional Encoder Representations from Transformers, was introduced by a team of researchers at Google AI Language. It is capable of performing a wide variety of state-of-the-art NLP tasks including Q&A, sentiment analysis, and sentence classification. What makes BERT different from other language models is that it applies the bidirectional training of the Transformer to language modelling. The Transformer is an attention mechanism that learns contextual relations between words in a text. BERT is designed to pre-train deep bidirectional representations from unlabeled text by using both left and right context in all layers.

NVIDIA researchers chose BERT-Large, a version of BERT created with 340 million parameters, for the study. NVIDIA’s DGX SuperPOD was able to train the model in a record-breaking time of 53 minutes. The SuperPOD was made up of 92 DGX-2H nodes and 1,472 GPUs, which were running PyTorch with Automatic Mixed Precision. A table in the original post shows the time taken to train BERT-Large for various numbers of GPUs (Source: NVIDIA).

Looking at these results, the team concluded, “The combination of GPUs with plenty of computing power and high-bandwidth access to lots of DRAM, and fast interconnect technologies, makes the NVIDIA data center platform optimal for dramatically accelerating complex networks like BERT.” In a conversation with reporters and analysts, Bryan Catanzaro, Vice President of Applied Deep Learning Research at NVIDIA, said, “Without this kind of technology, it can take weeks to train one of these large language models.” NVIDIA further said that it has achieved the fastest BERT inference time of 2.2 milliseconds by running the model on a Tesla T4 GPU with TensorRT 5.1 optimized for datacenter inference.

NVIDIA launches Project Megatron, under which it will research training transformer language models at scale

Earlier this year, OpenAI introduced the 1.5 billion parameter GPT-2 language model, which generates nearly coherent and meaningful texts. The NVIDIA Research team has built a scaled-up version of this model, called GPT-2 8B. As its name suggests, it is made up of 8.3 billion parameters, which makes it 24X the size of BERT-Large. To train this huge model, the team used PyTorch with 8-way model parallelism and 64-way data parallelism on 512 GPUs. This experiment was part of a bigger project called Project Megatron, under which the team is trying to create a platform that facilitates the training of such “enormous billion-plus Transformer-based networks.” A graph in the original post shows the compute performance and scaling efficiency achieved (Source: NVIDIA).

With the increase in the number of parameters, there was also a noticeable improvement in accuracy compared to smaller models. The model was able to achieve a wikitext perplexity of 17.41, which surpasses previous results on the wikitext test dataset by Transformer-XL. However, it does start to overfit after about six epochs of training, which can be mitigated by using even larger-scale problems and datasets.

NVIDIA has open-sourced the code for reproducing the single-node training performance in its BERT GitHub repository. The NLP code for Project Megatron is also openly available in the Megatron Language Model GitHub repository. To know more in detail, check out the official announcement by NVIDIA. Also, check out the following YouTube video: https://www.youtube.com/watch?v=Wxi_fbQxCM0

Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
ACLU (American Civil Liberties Union) file a complaint against the border control officers for violating the constitutional rights of an Apple employee
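The article notes that the runs used PyTorch with Automatic Mixed Precision. As a general illustration of that technique (not NVIDIA's exact setup, which relied on NVIDIA's Apex library at the time), here is a minimal sketch using PyTorch's later native torch.cuda.amp API; the model and data are placeholders.

```python
# Minimal sketch of mixed-precision training in PyTorch (torch.cuda.amp);
# illustrative only -- NVIDIA's BERT runs used their Apex AMP library.
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024).cuda()          # placeholder for a real network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()          # scales the loss to avoid fp16 underflow

for step in range(100):
    x = torch.randn(32, 1024, device="cuda")       # placeholder batch
    target = torch.randn(32, 1024, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                # ops run in fp16 where it is safe
        loss = nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()                  # backward pass on the scaled loss
    scaler.step(optimizer)                         # unscale gradients, then step
    scaler.update()
```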

Verizon sells Tumblr to WordPress parent, Automattic, for allegedly less than $3million, a fraction of its acquisition cost

Vincy Davis
13 Aug 2019
4 min read
Yesterday, Tumblr's staff announced to its users that Automattic, the company that owns WordPress.com, plans to acquire Tumblr. Though the official post does not mention any details, it has been reported that Verizon sold Tumblr for less than $3 million. Automattic will also absorb 200 of Verizon’s employees; other details of the deal remain undisclosed. The official blog post states, “We couldn’t be more excited to be joining a team that has a similar mission. Many of you know WordPress.com, Automattic’s flagship product. WordPress.com and Tumblr were both early pioneers among blogging platforms.”

https://twitter.com/jeffdonof/status/1161034494465519620

Launched in 2007, Tumblr is a microblogging and social networking website that allows users to upload and share photos, music and art, and to post short blogs. It hosts more than 450 million blogs and was once considered one of the major players among social media platforms. In 2013, Yahoo acquired Tumblr for $1.1 billion, when it was one of the leading social media platforms. After poor returns from Tumblr, Yahoo wrote its value down to $230 million, and in 2017 Verizon took it over as part of its Yahoo acquisition.

In December 2018, Verizon announced a new policy banning all adult content on Tumblr. The policy came days after Tumblr was removed from Apple’s iOS App Store over a child pornography incident, and it infuriated many users, leading to a further decline in Tumblr's user count. Two months ago, it was reported that Verizon was keen to sell Tumblr in order to make up for its missed revenue targets.

Automattic acquiring the company is seen as a good sign by many, as WordPress.com is one of the most popular open source blogging platforms. Although Tumblr has suffered from inconsistent ownership all along, it does have a loyal user base. Automattic’s Chief Executive Officer, Matt Mullenweg, believes that the new ownership and investment will make Tumblr blossom. “I was very impressed with the engagement and activity Tumblr has continued to have,” he said on Hacker News. In an interview with the Wall Street Journal, Mullenweg said this is the biggest acquisition for the company in terms of price and headcount, and mentioned that Tumblr will act as a “complementary” site to WordPress.

Although Automattic allows adult content on its own platform, Mullenweg has said that Automattic will continue Verizon’s policy of no adult content on Tumblr: “Adult content is not our forte either, and it creates a huge number of potential issues with app stores, payment providers, trust and safety.”

https://twitter.com/photomatt/status/1161049101741494273

Many users are annoyed with Automattic’s decision not to support adult content on Tumblr.

https://twitter.com/countchrisdo/status/1161136251631734784

Another user tweeted that Twitter and Reddit both allow adult content, so Automattic should show some care for the people affected by the ban. He added, “No one wants the NSFW ban to stay, but I guess you're fine with it as long as it lines your pockets.” Another user says, “I'm curious why you would choose to maintain Verizon's policy changes that alienated the majority of the user-base.”

Many users are, however, happy that Tumblr has finally found a stable host in Automattic.

https://twitter.com/marcoarment/status/1161015149563645953
https://twitter.com/fraying/status/1161020130966437888
https://twitter.com/onalark/status/1161020459980222464

Some feel that Tumblr is a dead company and that Automattic’s $3 million is down the drain.

https://twitter.com/shiruken/status/1161058926936449025
https://twitter.com/1amnerd/status/1161208412752863233

A user on Hacker News comments, “Surprised by this news. Tumblr has lost a ton of momentum since its policy change, and the site itself doesn't have a very strong "brand" audience attached to it.”

Tumblr open sources its Kubernetes tools for better workflow integration
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Verizon hosted Ericsson 2018 OSS/BSS User Group with a ‘Quest For Easy’ theme


PyTorch 1.2 is here with a new TorchScript API, expanded ONNX export, and more

Bhagyashree R
12 Aug 2019
3 min read
Last week, the PyTorch team announced the release of PyTorch 1.2. This version comes with a new TorchScript API with improved Python language coverage, expanded ONNX export, a standard nn.Transformer module, and more.

https://twitter.com/PyTorch/status/1159552940257923072

Here are some of the updates in PyTorch 1.2:

A new TorchScript API

TorchScript enables you to create models that are serializable and optimizable from PyTorch code. PyTorch 1.2 brings a new “easier-to-use TorchScript API” for converting nn.Modules into ScriptModules. torch.jit.script now recursively compiles the functions, methods, and classes it encounters, and the preferred way to create a ScriptModule is torch.jit.script(nn_module_instance) instead of inheriting from torch.jit.ScriptModule. With this update, some items are deprecated and developers are recommended not to use them in new code: the @torch.jit.script_method decorator, classes that inherit from torch.jit.ScriptModule, the torch.jit.Attribute wrapper class, and the __constants__ array. TorchScript also has improved support for Python language constructs and Python's standard library. It supports iterator-based constructs such as for..in loops, zip(), and enumerate(), as well as the math and string libraries and other Python built-in functions.

Full support for ONNX Opset export

The PyTorch team has worked with Microsoft to bring full support for exporting ONNX Opset versions 7, 8, 9, and 10. PyTorch 1.2 includes the ability to export dropout, slice, flip and interpolate in Opset 10. ScriptModule has been improved to include support for multiple outputs, tensor factories, and tuples as inputs and outputs. Developers can also register their own symbolics to export custom ops, and set the dynamic dimensions of inputs during export.

A standard nn.Transformer

PyTorch 1.2 comes with a standard nn.Transformer module that allows you to modify its attributes as needed. Based on the paper Attention Is All You Need, this module relies entirely on an attention mechanism for drawing global dependencies between input and output. It is designed so that its individual components can be used independently; for instance, you can use nn.TransformerEncoder without the larger nn.Transformer.

Breaking changes in PyTorch 1.2

The return dtype of comparison operations including lt, le, gt, ge, eq, and ne is now torch.bool instead of torch.uint8. The type of torch.tensor(bool) and torch.as_tensor(bool) is changed to the torch.bool dtype instead of torch.uint8. Some linear algebra functions have been removed in favor of renamed operations; a table in the release notes lists the removed operations and their alternatives (Source: PyTorch).

Check out the PyTorch release notes to know more in detail.

PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
Facebook open-sources PyText, a PyTorch based NLP modeling framework
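Here is a minimal sketch of the features described above: recursive scripting of an nn.Module instance, the standard nn.Transformer module, and the new torch.bool comparison dtype. The module and tensor sizes are arbitrary placeholders.

```python
# Sketch of PyTorch 1.2 features: recursive torch.jit.script and nn.Transformer.
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 4)

    def forward(self, x):
        # Python control flow is captured by the TorchScript compiler.
        if x.sum() > 0:
            return self.linear(x)
        return torch.zeros(x.size(0), 4)

# New in 1.2: script a module instance directly instead of subclassing
# torch.jit.ScriptModule and decorating methods with @torch.jit.script_method.
scripted = torch.jit.script(MyModule())
print(scripted(torch.randn(2, 16)))

# The standard nn.Transformer module (sizes are placeholders).
transformer = nn.Transformer(d_model=32, nhead=4,
                             num_encoder_layers=2, num_decoder_layers=2)
src = torch.randn(10, 8, 32)   # (sequence, batch, features)
tgt = torch.randn(20, 8, 32)
out = transformer(src, tgt)
print(out.shape)               # torch.Size([20, 8, 32])

# Breaking change: comparisons now return torch.bool rather than torch.uint8.
print((torch.tensor([1, 2]) > 1).dtype)   # torch.bool
```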


Telegram introduces new features: Slow mode switch, custom titles, comments widget and much more!

Amrata Joshi
12 Aug 2019
3 min read
Last week, the team at Telegram, the messaging app, introduced new features for group admins and users. These features include a Slow Mode switch, custom admin titles, new video features, and much more.

What’s new in Telegram?

Admins get more authority to manage the group

Slow Mode switch

The Slow Mode feature allows a group admin to control how often members can send messages in the group. Once the admin enables Slow Mode, users will only be able to send one message per the interval the admin chooses, and a timer shows them how long they need to wait before sending their next message. The feature is intended to make group conversations more orderly and to raise the value of each individual message. The official post suggests admins “Keep it [Slow Mode] on permanently, or toggle as necessary to throttle rush hour traffic.”

Image Source: Telegram

Custom titles

Group owners can now set custom titles for admins, like ‘Meme Queen’, ‘Spam Hammer’ or ‘El Duderino’. These custom titles are shown with the default admin labels. To add a custom title, edit the admin's rights in Group Settings.

Image Source: Telegram

Silent messages

Telegram also plans to bring more peace of mind to its users with a feature that allows them to message friends without any sound. Users just have to hold the send button to have any message or media delivered silently.

New features for videos

Videos shared on Telegram now show thumbnail previews as users scroll through them, to help find the moment they were looking for. If users add a timestamp like 0:45 to a video caption, it is automatically highlighted as a link, and tapping the timestamp plays the video from that spot.

Comments widget

The team has also come up with a new tool called Comments.App for users to comment on channel posts. With the comments widget, users can log in with just two taps and comment with text and photos, as well as like, dislike and reply to comments from others.

A few users are excited about this news and prefer Telegram to WhatsApp, though some wish it offered end-to-end encryption by default. A user commented on Hacker News, “I really like Telegram. Only end-to-end encryption by default and in group chats would make it perfect.”

To know more about this news, check out the official post by Telegram.

Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests
Hacker destroys Iranian cyber-espionage data; leaks source code of APT34’s hacking tools on Telegram
Trick or a treat: Telegram announces its new ‘delete feature’ that deletes messages on both the ends


Uber goes on a hiring freeze in its engineering teams after a painful second-quarter operating loss of $5.4 billion

Sugandha Lahoti
12 Aug 2019
3 min read
Uber has stopped recruiting new candidates for its engineering teams in the U.S. and Canada after reporting its largest-ever quarterly loss of $5.4 billion in its second-quarter earnings call. The loss is attributed to heavy competition and IPO expenses. The second-quarter 2019 results were released last Thursday.

Of the $5.4 billion, almost $4 billion was stock-based compensation, a one-time charge related to Uber's IPO that inflated the loss number. That leaves roughly $1.2 billion burned on operations this quarter, of which about 50% came from Uber Eats subsidies. The investor report also highlights an increase in bookings (up 31%), active users (up 30%), trips (up 35%), and revenue (up 14%). In July, the Uber platform reached over 100 million Monthly Active Platform Consumers. Its core business, ridesharing, has improved its gross margin and unit economics quarter-over-quarter.

Uber froze hiring for software engineer and product manager positions across the US and Canada, citing that its hiring goals had been exceeded. According to Yahoo, which first reported the news, Uber has canceled scheduled on-site interviews for tech roles. Job applicants were informed that the positions are being put on hold due to a hiring freeze in engineering teams in the U.S. and Canada. In emails sent to job interviewees, Uber recruiters explained “there have been some changes” and the opportunity has been “put on hold for now,” according to emails reviewed by Yahoo Finance. Hiring remains unaffected for workers in Uber’s freight and autonomous vehicle businesses.

Uber also laid off 400 employees in its marketing department earlier this month, cutting a third of the 1,200-employee marketing team. The layoffs followed Uber’s IPO and a first-quarter investor report with losses of $1 billion. The reorganized marketing team will be under the leadership of Mike Strickman. Many of Uber’s teams are “too big, which creates overlapping work, makes for unclear decision owners, and can lead to mediocre results,” CEO Dara Khosrowshahi wrote in an email sent to employees and shared with TechCrunch. “As a company, we can do more to keep the bar high, and expect more of ourselves and each other,” he added, saying the restructuring aims to put the marketing team, and the company, back on track.

The move suggests Uber is getting quite cautious about headcount in order to protect its strategic priorities.

https://twitter.com/RonOpti/status/1159982955487383552

In May, Uber drivers went on a two-hour strike in several major cities around the world, coinciding with Uber’s IPO. Labor groups organizing the strike protested the company's poor payment and labor practices.

Uber and Lyft drivers go on strike a day before Uber IPO roll-out
Uber introduces Base Web, an open source “unified” design system for building websites in React.
Uber open-sources Peloton, a unified Resource Scheduler

Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules

Fatema Patrawala
09 Aug 2019
3 min read
The 9th Circuit U.S. Court of Appeals ruled on Thursday that Facebook users in Illinois can sue the company over its face recognition technology. The court rejected Facebook’s arguments to halt a class action lawsuit claiming it illegally collected and stored the biometric data of millions of users. The class action lawsuit has been working its way through the courts for four years, since Illinois Facebook users sued the company for alleged violations of the state’s Biometric Information Privacy Act by automatically collecting and identifying people in photographs posted to the service. The case is Patel et al v Facebook Inc, 9th U.S. Circuit Court of Appeals, No. 19-15982.

Now, thanks to a unanimous decision from the circuit court, the lawsuit can proceed. The court stated, “We conclude that the development of face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.”

According to the American Civil Liberties Union (ACLU), it's the first decision by a U.S. appellate court to directly address privacy concerns posed by facial recognition technology. "This decision is a strong recognition of the dangers of unfettered use of face surveillance technology," Nathan Freed Wessler, an attorney with the ACLU Speech, Privacy and Technology Project, said in a statement. "The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale."

“This biometric data is so sensitive that if it is compromised, there is simply no recourse,” Shawn Williams, a lawyer for plaintiffs in the class action, told Reuters. “It’s not like a Social Security card or credit card number where you can change the number. You can’t change your face.”

Facebook is currently facing broad criticism from lawmakers and regulators over its privacy practices. Last month, Facebook agreed to pay a record $5 billion fine to settle a Federal Trade Commission data privacy probe. Facebook said it plans to appeal the ruling. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” the company said, according to a Reuters report.

Illinois users accused Facebook of violating the Biometric Information Privacy Act

Reuters reports that the lawsuit began in 2015, when Illinois users accused Facebook of violating the state’s Biometric Information Privacy Act in collecting biometric data. Facebook allegedly accomplished this through its “Tag Suggestions” feature, which allowed users to recognize their Facebook friends from previously uploaded photos. Writing for the appeals court, Circuit Judge Sandra Ikuta said the Illinois users could sue as a group, rejecting Facebook’s argument that their claims were unique and required individual lawsuits. She also said the 2008 Illinois law was intended to protect individuals’ “concrete interests in privacy,” and Facebook’s alleged unauthorized use of a face template “invades an individual’s private affairs and concrete interests.”

The court returned the case to U.S. District Judge James Donato in San Francisco, who had certified a class action in April 2018, for a possible trial. Illinois’ biometric privacy law provides for damages of $1,000 for each negligent violation and $5,000 for each intentional or reckless violation. Williams, a partner at Robbins Geller Rudman & Dowd, said the class could include 7 million Facebook users.

Facebook fails to fend off a lawsuit over data breach of nearly 30 million users
Facebook fails to block ECJ data security case from proceeding
Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content
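A rough sketch using only the statutory figures and class size quoted above illustrates the potential exposure, under the assumption (mine, not the article's) of one violation per class member:

```python
# Sketch of potential statutory damages under BIPA, using figures from the article.
class_size = 7_000_000                 # Facebook users the class could include
negligent_per_violation = 1_000        # $ per negligent violation
intentional_per_violation = 5_000      # $ per intentional or reckless violation

low = class_size * negligent_per_violation
high = class_size * intentional_per_violation
print(f"Potential damages: ${low / 1e9:.0f}B to ${high / 1e9:.0f}B")
# $7B to $35B, assuming a single violation per class member.
```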


StockX confirms a data breach impacting 6.8 million customers

Sugandha Lahoti
09 Aug 2019
3 min read
StockX, an online marketplace for buying and selling sneakers, suffered a major data breach in May impacting 6.8 million customers. The leaked records included names, email addresses and hashed passwords. The full scale of the breach came to light after an unnamed seller of the stolen data contacted TechCrunch with information about the attack. TechCrunch then verified the claims by contacting people from a sample of 1,000 records, using information only they would know. StockX released a statement yesterday acknowledging that a data breach had indeed occurred.

StockX says it was made aware of the breach on July 26 and immediately launched a forensic investigation, engaging experienced third-party data experts to assist. On finding evidence suggesting customer data may have been accessed by an unknown third party, it sent customers an email on August 3 to make them aware of the incident. This email, surprisingly, asked customers to reset their passwords citing system updates but said nothing about the data breach, leaving users confused about what caused the alleged system update and why there was no prior warning. Later the same day, StockX confirmed that it had discovered a data security issue and that an unknown third party had gained access to certain customer data, including customer name, email address, shipping address, username, hashed passwords, and purchase history.

The passwords were hashed using salted MD5. According to weleakinfo, this is a very weak hashing algorithm; at least 90% of such hashes can be cracked successfully. Users were infuriated that instead of being honest, StockX simply sent its customers an email asking them to reset their passwords.

https://twitter.com/Asaud_7/status/1157843000170561536
https://twitter.com/kustoo/status/1157735133157314561
https://twitter.com/RunWithChappy/status/1157851839754383360

StockX has since released a system-wide security update, a full reset of all customer passwords with an email alerting customers to reset them, high-frequency credential rotation on all servers and devices, and a lockdown of its cloud computing perimeter. However, it was a little too late with its ‘ongoing investigation’, as mentioned on its blog: TechCrunch revealed that the seller had put the data up for sale for $300 in a dark web listing and that one person had already bought it.

StockX is also subject to the EU’s General Data Protection Regulation, considering it has a global customer base, and can potentially be fined for the incident.

https://twitter.com/ComplexSneakers/status/1157754866460221442

According to the FTC, StockX is also not compliant with US laws regarding a data breach.

https://twitter.com/zruss/status/1157785830200619008

Following Capital One data breach, GitHub gets sued and AWS security questioned by a US Senator.
British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach.
U.S. Senator introduces a bill that levies jail time and hefty fines for companies violating data breaches.
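To illustrate why salted MD5 is considered weak for password storage, here is a minimal sketch contrasting it with a deliberately slow key-derivation function from Python's standard library. The password, salt, and iteration count are placeholders, not values from the breach.

```python
# Sketch: salted MD5 vs. a slow key-derivation function for password storage.
import hashlib, os, time

password = b"correct horse battery staple"   # placeholder password
salt = os.urandom(16)                        # placeholder random salt

# Salted MD5: a single fast hash. An attacker holding the leaked salt can try
# billions of guesses per second on commodity GPUs.
start = time.perf_counter()
md5_digest = hashlib.md5(salt + password).hexdigest()
print(f"MD5:    {time.perf_counter() - start:.6f}s per guess")

# PBKDF2-HMAC-SHA256 with a high iteration count: every guess costs the
# attacker the same large amount of work, slowing offline cracking massively.
start = time.perf_counter()
pbkdf2_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000).hex()
print(f"PBKDF2: {time.perf_counter() - start:.6f}s per guess")
```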


DeepCode, the AI startup for code review, raises $4M seed funding; will be free for educational use and enterprise teams with 30 developers

Vincy Davis
06 Aug 2019
3 min read
Today, DeepCode, the tool that uses artificial intelligence (AI) to help developers write better code, raised $4M in seed funding to expand its machine learning systems for code review. DeepCode plans to expand its list of supported languages (by adding C#, PHP, and C/C++), improve the scope of its code recommendations, and grow the team internationally. It has also been revealed that DeepCode is working on its first integrated developer environment (IDE) project. The funding round was led by Earlybird, with participation from 3VC and Btov Partners, DeepCode’s existing investor.

DeepCode has also announced a new pricing structure. Previously, the tool was only free for open source software development projects. Today, the company announced that it will also be free for educational purposes and for enterprise teams with 30 developers.

https://twitter.com/DeepCodeAI/status/1158666106690838528

Launched in 2016, DeepCode flags bugs, critical vulnerabilities, and style violations in the earlier stages of software development. Currently, it supports the Java, JavaScript, and Python languages. When a developer links their GitHub or Bitbucket account to DeepCode, the DeepCode bot processes millions of commits in the available open source software projects and highlights broken code that can cause compatibility issues. In a statement to VentureBeat, Paskalev said that DeepCode saves 50% of the time developers spend on finding bugs.

Read Also: Thanks to DeepCode, AI can help you write cleaner code

Earlybird co-founder and partner Christian Nagel says, “DeepCode provides a platform that enhances the development capabilities of programmers. The team has a deep scientific understanding of code optimization and uses artificial intelligence to deliver the next breakthrough in software development.”

Many open source projects have been getting major investments from tech companies lately. Last year, the software giant Microsoft acquired the open source code platform GitHub for $7.5 billion. GitLab, another popular platform for distributed version control and source code management, also raised a $100 million Series D funding round. With the software industry growing, the amount of code written has increased greatly, requiring more testing and debugging. DeepCode receiving funds is definitely good news for the developer community.

https://twitter.com/andreas_herzog/status/1158666757588115456
https://twitter.com/evanderburg/status/1158710341963935745

Facebook research suggests chatbots and conversational AI are on the verge of empathizing with humans
Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker
Virality of fake news on social media: Are weaponized AI bots to blame, questions Destin Sandlin

BlazingSQL, a GPU-accelerated SQL engine built on top of RAPIDS, is now open source

Bhagyashree R
06 Aug 2019
4 min read
Yesterday, the BlazingSQL team open-sourced BlazingSQL under the Apache 2.0 license. BlazingSQL is a lightweight, GPU-accelerated SQL engine built on top of the RAPIDS.ai ecosystem. RAPIDS.ai is a suite of software libraries and APIs for end-to-end execution of data science and analytics pipelines entirely on GPUs.

Explaining his vision behind this step, Rodrigo Aramburu, CEO of BlazingSQL, wrote in a Medium blog post, “As RAPIDS adoption continues to explode, open-sourcing BlazingSQL accelerates our development cycle, gets our product in the hands of more users, and aligns our licensing and messaging with the greater RAPIDS.ai ecosystem.” Aramburu calls RAPIDS “the next-generation analytics ecosystem” in which BlazingSQL serves as the SQL standard. BlazingSQL also serves as a SQL interface for cuDF, a GPU DataFrame (GDF) library for loading, joining, aggregating, and filtering data. An overview diagram in the announcement shows how BlazingSQL fits into the RAPIDS.ai ecosystem (Source: BlazingSQL).

Advantages of using BlazingSQL

Cost-effective: Customers often have to cluster thousands of servers to process data at scale, which can be very expensive. BlazingSQL needs only a small fraction of that infrastructure to run at an equivalent scale.

Better performance: BlazingSQL is 20x faster than an Apache Spark cluster when extracting, transforming, and loading data. It generates GPU-accelerated results in seconds, enabling data scientists to quickly iterate over new models.

Easily scale up workloads: Usually, workloads are first prototyped at small scale and then rebuilt for distributed systems. With BlazingSQL, you write the code only once, and it can be adapted to the scale of distribution with minimal code changes.

Connect to multiple data sources: BlazingSQL connects to multiple data sources to query files in local and distributed filesystems. Currently, it supports AWS S3 and Apache HDFS, and the team plans to support more in the future.

Run federated queries: BlazingSQL lets you query raw data directly into GPU memory in its original format with the help of federated queries. A federated query allows you to join data from multiple data stores across multiple data formats. It currently supports CSV, Apache Parquet, JSON, and existing GPU DataFrames.

GM of data science at NVIDIA, Josh Patterson, said in the announcement, “NVIDIA and the RAPIDS ecosystem are delighted that BlazingSQL is open-sourcing their SQL engine built on RAPIDS. By leveraging Apache Arrow on GPUs and integrating with Dask, BlazingSQL will extend open-source functionality, and drive the next wave of interoperability in the accelerated data science ecosystem.”

This news sparked a discussion on Hacker News, where Aramburu answered developers’ questions about BlazingSQL. One developer asked why the team chose CUDA instead of an open-source option like OpenCL. Aramburu explained, “Early on when we first started playing around with General Processing on GPU's we had Nvidia cards to begin with and I started looking at the APIs that were available to me. The CUDA ones were easier for me to get started, had tons of learning content that Nvidia provided, and were more performant on the cards that I had at the time compared to other options. So we built up lots of expertise in this specific way of coding for GPUS. We also found time and time again that it was faster than OpenCL for what we were trying to do and the hardware available to us on cloud providers was Nvidia GPUs. The second answer to this question is that blazingsql is part of a greater ecosystem. rapids.ai and the largest contributor by far is Nvidia. We are really happy to be working with their developers to grow this ecosystem and that means that the technology will probably be CUDA only unless we somehow program "backends" like they did with thrust but that would be eons away from now.”

People also celebrated the news of BlazingSQL’s open-sourcing. A comment on Hacker News reads, “This is great. The BlazingDB guys are awesome and now that the project is open source this is another good reason for my teams to experiment with different workloads and compare it against a SparkSQL approach.”

BlazingDB announces BlazingSQL, a GPU SQL Engine for NVIDIA’s open source RAPIDS
Amazon introduces PartiQL, a SQL-compatible unifying query language for multi-valued, nested, and schema-less data
Amazon Aurora makes PostgreSQL Serverless generally available
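The workflow described above, registering tables over files and querying them into GPU DataFrames, looks roughly like the sketch below. It is modeled on BlazingSQL examples from around the time of the announcement, so the method names and file path should be treated as assumptions rather than a definitive API reference.

```python
# Rough sketch of querying a CSV file with BlazingSQL's Python API; based on
# examples from around the announcement, so names and paths are assumptions.
from blazingsql import BlazingContext

bc = BlazingContext()

# Register a table backed directly by a file (a local path here; S3/HDFS URIs
# are also supported for federated queries over remote data).
bc.create_table("taxi", "/data/nyc_taxi.csv")   # hypothetical file path

# Run SQL on the GPU; the result comes back as a cuDF GPU DataFrame that can
# feed straight into the rest of the RAPIDS ecosystem (cuML, Dask, etc.).
gdf = bc.sql("""
    SELECT passenger_count, AVG(trip_distance) AS avg_distance
    FROM taxi
    GROUP BY passenger_count
""")
print(gdf.head())
```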


Apple plans to suspend Siri response grading process due to privacy issues

Amrata Joshi
05 Aug 2019
4 min read
Last month, the Guardian reported that Apple contractors regularly listen to confidential medical information, drug deals, and private recordings of couples as part of their job reviewing Siri recordings. The contractors are responsible for grading Siri’s responses on a variety of factors, such as whether the activation of the voice assistant was deliberate or accidental, whether the query was something Siri was expected to help with, and whether Siri’s response was appropriate.

According to the Guardian's report, one Apple contractor explained the grading process: audio snippets, which are not connected to names or IDs of individuals, are played to contractors, who listen to them to check whether Siri heard the request accurately or was invoked by mistake.

In a statement to the Guardian, Apple said, “A small portion of Siri requests are analysed to improve Siri and dictation. User requests are not associated with the user’s Apple ID. Siri responses are analysed in secure facilities and all reviewers are under the obligation to adhere to Apple’s strict confidentiality requirements.” Additionally, Apple said that the data “is used to help Siri and dictation … understand you better and recognise what you say.”

Siri can also be activated accidentally when it mistakenly hears its wake phrase, “Hey Siri”. The Apple contractor explained, “The sound of a zip, Siri often hears as a trigger.”

This month, Apple plans to suspend Siri’s response grading while it reviews the process, likely in response to the Guardian's report. Apple will also issue a software update in the future that will give Siri users a choice of whether to participate in the grading process.

In a statement to TechCrunch, Apple said, “We are committed to delivering a great Siri experience while protecting user privacy.” The company further added, “While we conduct a thorough review, we are suspending Siri grading globally. Additionally, as part of a future software update, users will have the ability to choose to participate in grading.”

Companies like Amazon and Google have also come under scrutiny for having humans review recordings from their voice assistants. Reports stated that Amazon staff were listening to some Alexa recordings, and a similar incident occurred with Google Assistant. This month, Amazon added an option to disable human review of Alexa recordings.

Users might appreciate being asked for consent before their personal recordings are reviewed. These recordings are also stored on servers, and if a data breach takes place or a malicious attacker targets the server or datacenter, there is a real possibility of such data getting into the wrong hands, which raises the question of whether our personal data is really secure.

In a recent Threatpost podcast on voice assistant privacy issues, Tim Mackey, principal security strategist at the cybersecurity research center at Synopsys, said, “The biggest concern that I have is actually around data retention policies and disclosure.” Mackey further added, “So we have an expectation that these are connected devices, and that perhaps short of the Alexa-then-perform-action activity, that the communication, the actual processing of our request is going to occur on an Amazon server, Google server or so forth…. And what we’re learning is that the providers tend to keep this data for an indeterminate amount of time. And that’s a significant risk, because the volume of data itself means that it’s potentially very interesting to a malicious actor someplace who wishes to say, target an individual.”

Apple acquires Pullstring to possibly help Apple improve Siri and other IoT-enabled gadgets
Apple joins the Thread Group, signaling its Smart Home ambitions with HomeKit, Siri and other IoT products
Apple previews macOS Catalina 10.15 beta, featuring Apple music, TV apps, security, zsh shell, driveKit, and much more!