
How-To Tutorials - News

104 Articles
Savia Lobo
06 Jun 2019
11 min read

Amazon re:MARS Day 1 kicks off showcasing Amazon’s next-gen AI robots; Spot, the robo-dog and a guest appearance from ‘Iron Man’

Amazon’s inaugural re:MARS event kicked off on Tuesday, June 4 at the Aria in Las Vegas. This four-day event is inspired by MARS, a yearly invite-only event hosted by Jeff Bezos that brings together innovative minds in machine learning, automation, robotics, and space to share new ideas across these rapidly advancing domains. re:MARS featured a number of announcements revealing a range of robots, each engineered for a different purpose. These include helicopter drones for delivery, two robot dogs by Boston Dynamics, and autonomous human-like acrobats by Walt Disney Imagineering, among others. Amazon also revealed Alexa’s new dialog modeling for natural, cross-skill conversations. Let us have a brief look at each of the announcements.

Robert Downey Jr. announces ‘The Footprint Coalition’ project to clean up the environment using robotics

Popularly known as “Iron Man”, Robert Downey Jr. provided one of the most exciting moments of re:MARS when he announced a new project called The Footprint Coalition, which aims to clean up the planet using advanced technologies. “Between robotics and nanotechnology we could probably clean up the planet significantly, if not entirely, within a decade,” he said. According to Forbes, “Amazon did not immediately respond to questions about whether it was investing financially or technologically in Downey Jr.’s project.” “At this point, the effort is severely light on details, with only a bare-bones website to accompany Downey’s public statement, but the actor said he plans to officially launch the project by April 2020,” Forbes reports.

A recent United Nations report found that humans are having an unprecedented and devastating effect on global biodiversity, and researchers have found microplastics polluting the air, ocean, and soil. The announcement is also notable because Amazon itself is under fire for its policies around the environment and climate change.
Additionally, Morgan Pope and Tony Dohi of Walt Disney Imagineering demonstrated their work to create autonomous acrobats.

https://twitter.com/jillianiles/status/1136082571081555968
https://twitter.com/thesullivan/status/1136080570549563393

Amazon will soon deliver orders using drones

On Wednesday, Amazon unveiled a revolutionary new drone that will begin test-delivering toothpaste and other household goods within months. The drone is “part helicopter and part science-fiction aircraft”, with built-in AI features and sensors that help it fly robotically without threatening traditional aircraft or people on the ground. Gur Kimchi, vice president of Amazon Prime Air, said in an interview with Bloomberg, “We have a design that is amazing. It has performance that we think is just incredible. We think the autonomy system makes the aircraft independently safe.” However, he declined to provide details on where the delivery tests will be conducted. The drones have received a year’s approval from the FAA to test the devices in limited ways that still won’t allow deliveries.

According to a Bloomberg report, “It can take years for traditional aircraft manufacturers to get U.S. Federal Aviation Administration approval for new designs and the agency is still developing regulations to allow drone flights over populated areas and to address national security concerns. The new drone presents even more challenges for regulators because there aren’t standards yet for its robotic features”.

Competitors to Amazon’s unnamed drone include Alphabet Inc.’s Wing, which in April became the first drone operator to win FAA approval to operate as a small airline. United Parcel Service Inc. and drone startup Matternet Inc. began using drones to move medical samples between hospitals in Raleigh, North Carolina, in March.

Amazon’s drone is about six feet across, with six propellers that lift it vertically off the ground.
It is surrounded by a six-sided shroud that protects people from the propellers and also serves as a high-efficiency wing, so the craft can fly more horizontally like a plane. Once it gets off the ground, the craft tilts and flies sideways, the helicopter blades acting more like airplane propellers. Kimchi said, “Amazon’s business model for the device is to make deliveries within 7.5 miles (12 kilometers) from a company warehouse and to reach customers within 30 minutes. It can carry packages weighing as much as five pounds. More than 80% of packages sold by the retail behemoth are within that weight limit.”

According to the company, one of the things the drone has mastered is detecting utility wires and clotheslines, which have been notoriously difficult to identify reliably and pose a hazard for a device attempting to make deliveries in urban and suburban areas. To know more about these high-tech drones in detail, head over to Amazon’s official blog post.

Boston Dynamics’ first commercial robot, Spot

Boston Dynamics revealed its first commercial product, a quadrupedal robot named Spot. Boston Dynamics’ CEO Marc Raibert told The Verge that Spot is currently being tested in a number of “proof-of-concept” environments, including package delivery and surveying work. He also said that although there’s no firm launch date for the commercial version of Spot, it should be available within months, certainly before the end of the year. “We’re just doing some final tweaks to the design. We’ve been testing them relentlessly”, Raibert said.

Spot robots are capable of navigating environments autonomously, but only when their surroundings have been mapped in advance. They can withstand kicks and shoves and keep their balance on tricky terrain, but they don’t decide for themselves where to walk. The robots are simple to control: using a D-pad, users can steer the robot just like an RC car or mechanical toy.
A quick tap on the video feed streamed live from the robot’s front-facing camera lets the user select a destination for it to walk to, and another tap lets the user assume control of a robot arm mounted on top of the chassis. With 3D cameras mounted atop, a Spot robot can map environments such as construction sites, identifying hazards and tracking work progress. It also has a robot arm that gives it greater flexibility, helping it open doors and manipulate objects.

https://twitter.com/jjvincent/status/1136096290016595968

The commercial version will be “much less expensive than prototypes [and] we think they’ll be less expensive than other peoples’ quadrupeds”, Raibert said. Here’s a demo video of the Spot robot at the re:MARS event.

https://youtu.be/xy_XrAxS3ro

Alexa gets new dialog modeling for improved natural, cross-skill conversations

Amazon unveiled new features in Alexa that will help the conversational agent answer more complex questions and carry out more complex tasks. Rohit Prasad, Alexa vice president and head scientist, said, “We envision a world where customers will converse more naturally with Alexa: seamlessly transitioning between skills, asking questions, making choices, and speaking the same way they would with a friend, family member, or co-worker. Our objective is to shift the cognitive burden from the customer to Alexa.”

This new update to Alexa is a set of AI modules that work together to generate responses to customers’ questions and requests. With every round of dialog, the system produces a vector, a fixed-length string of numbers that represents the context and the semantic content of the conversation. “With this new approach, Alexa will predict a customer’s latent goal from the direction of the dialog and proactively enable the conversation flow across topics and skills,” Prasad says.
“This is a big leap for conversational AI.” At re:MARS, Prasad also announced the developer preview of Alexa Conversations, a new deep learning-based approach that lets skill developers create more natural voice experiences with less effort, fewer lines of code, and less training data than before. The preview allows skill developers to create natural, flexible dialogs within a single skill; upcoming releases will allow developers to incorporate multiple skills into a single conversation.

With Alexa Conversations, developers provide: (1) application programming interfaces, or APIs, that provide access to their skills’ functionality; (2) a list of entities that the APIs can take as inputs, such as restaurant names or movie times; and (3) a handful of sample dialogs annotated to identify entities and actions and mapped to API calls. Alexa Conversations’ AI technology handles the rest. “It’s way easier to build a complex voice experience with Alexa Conversations due to its underlying deep-learning-based dialog modeling,” Prasad said. To know more about this announcement in detail, head over to Alexa’s official blog post.

Amazon Robotics unveiled two new robots at its fulfillment centers

Brad Porter, vice president of robotics at Amazon, announced two new robots: one code-named Pegasus and the other Xanthus. Pegasus, built to sort packages, is a 3-foot-wide robot equipped with a conveyor belt on top to drop the right box in the right location. “We sort billions of packages a year. The challenge in package sortation is, how do you do it quickly and accurately? In a world of Prime one-day [delivery], accuracy is super-important. If you drop a package off a conveyor, lose track of it for a few hours — or worse, you mis-sort it to the wrong destination, or even worse, if you drop it and damage the package and the inventory inside — we can’t make that customer promise anymore”, Porter said.
Porter said Pegasus robots have already driven a total of 2 million miles and have reduced the number of wrongly sorted packages by 50 percent. Xanthus, Porter said, represents the latest incarnation of Amazon’s drive robot; Amazon uses tens of thousands of the current-generation drive robot, known as Hercules, in its fulfillment centers. Amazon unveiled the Xanthus Sort Bot and the Xanthus Tote Mover. “The Xanthus family of drives brings innovative design, enabling engineers to develop a portfolio of operational solutions, all of the same hardware base through the addition of new functional attachments. We believe that adding robotics and new technologies to our operations network will continue to improve the associate and customer experience,” Porter says.

To know more about these new robots, watch the video below:

https://youtu.be/4MH7LSLK8Dk

StyleSnap: AI-powered shopping

Amazon announced StyleSnap, a recent move to promote AI-powered shopping. StyleSnap helps users pick out clothes and accessories: all they need to do is upload a photo or screenshot of what they are looking for when they are unable to describe it in words.

https://twitter.com/amazonnews/status/1136340356964999168

Amazon said, “You are not a poet. You struggle to find the right words to explain the shape of a neckline, or the spacing of a polka dot pattern, and when you attempt your text-based search, the results are far from the trend you were after.”

To use StyleSnap, open the Amazon app, tap the camera icon in the upper right-hand corner, select the StyleSnap option, and then upload an image of the outfit. StyleSnap then recommends similar outfits on Amazon to purchase, and users can filter across brand, pricing, and reviews. Amazon’s AI system can identify colors and edges, and then patterns like floral and denim. Using this information, its algorithm can accurately pick a matching style.
To know more about StyleSnap in detail, head over to Amazon’s official blog post.

Amazon Go trains cashierless store algorithms using synthetic data

At re:MARS, Amazon shared more details about Amazon Go, the company’s brand for its cashierless stores, saying that Amazon Go uses synthetic data to intentionally introduce errors into its computer vision system. Challenges that had to be addressed before opening stores, to avoid queues, included building vision systems that account for sunlight streaming into a store, tolerating little room for latency delays, and training with small amounts of data for certain tasks. Synthetic data is being used in a number of ways: to power few-shot learning, improve AI systems that control robots, and train AI agents to walk or to beat humans in games of Quake III.

Dilip Kumar, VP of Amazon Go, said, “As our application improved in accuracy — and we have a very highly accurate application today — we had this interesting problem that there were very few negative examples, or errors, which we could use to train our machine learning models.” He further added, “So we created synthetic datasets for one of our challenging conditions, which allowed us to be able to boost the diversity of the data that we needed. But at the same time, we have to be careful that we weren’t introducing artifacts that were only visible in the synthetic data sets, [and] that the data translates well to real-world situations — a tricky balance.”

To know more about this news in detail, check out this video:

https://youtu.be/jthXoS51hHA

The Amazon re:MARS event is still ongoing and will have many more updates. To catch live updates from Vegas, visit Amazon’s blog.

Read More

World’s first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase
Amazon introduces S3 batch operations to process millions of S3 objects
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available

Savia Lobo
28 May 2019
8 min read

Speech2Face: A neural network that “imagines” faces from hearing voices. Is it too soon to worry about ethnic profiling?

Last week, a few researchers from MIT CSAIL and Google AI published a study on reconstructing a facial image of a person from a short audio recording of that person speaking, in a paper titled “Speech2Face: Learning the Face Behind a Voice”. The researchers designed and trained a neural network using millions of natural Internet/YouTube videos of people speaking. During training, they demonstrated that the model learns voice-face correlations that allow it to produce images capturing various physical attributes of the speakers, such as age, gender, and ethnicity. The entire training was done in a self-supervised manner, utilizing the natural co-occurrence of faces and speech in Internet videos, without the need to model attributes explicitly. They further evaluated and numerically quantified how well Speech2Face reconstructions, obtained directly from audio, resemble the true face images of the speakers. For this, they tested their model both qualitatively and quantitatively on the AVSpeech and VoxCeleb datasets.

The Speech2Face model

The researchers utilized the VGG-Face model, a face recognition model pre-trained on a large-scale face dataset called DeepFace, and extracted a 4096-D face feature from the penultimate layer (fc7) of the network. These face features were shown to contain enough information to reconstruct the corresponding face images while being robust to many of the aforementioned variations. The Speech2Face pipeline consists of two main components: 1) a voice encoder, which takes a complex spectrogram of speech as input and predicts a low-dimensional face feature that would correspond to the associated face; and 2) a face decoder, which takes the face feature as input and produces an image of the face in a canonical form (frontal-facing, with neutral expression).
During training, the face decoder is fixed; only the voice encoder, which predicts the face feature, is trained.

How were the facial features evaluated?

To quantify how well different facial attributes are captured in Speech2Face reconstructions, the researchers tested different aspects of the model.

Demographic attributes: The researchers used Face++, a leading commercial service for computing facial attributes. They evaluated and compared age, gender, and ethnicity by running the Face++ classifiers on the original images and the Speech2Face reconstructions. The Face++ classifiers return either “male” or “female” for gender, a continuous number for age, and one of four values, “Asian”, “black”, “India”, or “white”, for ethnicity. (Figure source: arxiv.org)

Craniofacial attributes: The researchers evaluated craniofacial measurements commonly used in the literature for capturing ratios and distances in the face. They computed the correlation between the original faces (F2F) and the corresponding Speech2Face (S2F) reconstructions, with face landmarks computed using the DEST library. There is a statistically significant (p < 0.001) positive correlation for several measurements. In particular, the highest correlations are measured for the nasal index (0.38) and nose width (0.35), features indicative of nose structures that may affect a speaker’s voice. (Figure source: arxiv.org)

Feature similarity: The researchers further tested how well a person can be recognized from the face features predicted from speech. They first directly measured the cosine distance between the predicted features and the true features obtained from the original face image of the speaker. Averaged over 5,000 test images, predictions from 6-second audio segments showed consistently lower error than those from 3-second segments across all error metrics, further evidencing the qualitative improvement the researchers observed.
They further evaluated how accurately they could retrieve the true speaker from a database of face images. To do so, they took the speech of a person, predicted the face feature using the Speech2Face model, and queried the database by computing the distance from that feature to the face features of all images in the database.

Ethical considerations with the Speech2Face model

The researchers said that the training data is a collection of educational videos from YouTube and does not equally represent the entire world population; hence, the model may be affected by this uneven distribution of data. They also highlighted that “if a certain language does not appear in the training data, our reconstructions will not capture well the facial attributes that may be correlated with that language”.

“In our experimental section, we mention inferred demographic categories such as ‘White’ and ‘Asian’. These are categories defined and used by a commercial face attribute classifier and were only used for evaluation in this paper. Our model is not supplied with and does not make use of this information at any stage”, the paper mentions. They also warn that any further investigation or practical use of this technology should be carefully tested to ensure that the training data is representative of the intended user population. “If that is not the case, more representative data should be broadly collected”, the researchers state.

Limitations of the Speech2Face model

To test the stability of the Speech2Face reconstruction, the researchers used faces from different speech segments of the same person, taken from different parts of the same video and from a different video. The reconstructed face images were consistent within and between the videos. They further probed the model with an Asian male speaking the same sentence in English and Chinese, to qualitatively test the effect of language and accent.
While having the same reconstructed face in both cases would be ideal, the model inferred different faces based on the spoken language. In other examples, the model was able to successfully factor out the language, reconstructing a face with Asian features even though the speaker, a girl, was speaking English with no apparent accent. “In general, we observed mixed behaviors and a more thorough examination is needed to determine to which extent the model relies on language. More generally, the ability to capture the latent attributes from speech, such as age, gender, and ethnicity, depends on several factors such as accent, spoken language, or voice pitch. Clearly, in some cases, these vocal attributes would not match the person’s appearance”, the researchers state in the paper.

Speech2Cartoon: converting generated images into cartoon faces

The face images reconstructed from speech may also be used for generating personalized cartoons of speakers from their voices. The researchers used Gboard, the keyboard app available on Android phones, which is capable of analyzing a selfie image to produce a cartoon-like version of the face. Such a cartoon re-rendering of the face may be useful as a visual representation of a person during a phone or video-conferencing call when the person’s identity is unknown or the person prefers not to share their picture. The reconstructed faces may also be used directly, to assign faces to machine-generated voices used in home devices and virtual assistants.

https://twitter.com/NirantK/status/1132880233017761792

A user on HackerNews commented, “This paper is a neat idea, and the results are interesting, but not in the way I’d expected. I had hoped it would [be in] the domain of how much person-specific information this can deduce from a voice, e.g. lip aperture, overbite, size of the vocal tract, openness of the nares. This is interesting from a speech perception standpoint.
Instead, it’s interesting more in the domain of how much social information it can deduce from a voice. This appears to be a relatively efficient classifier for gender, race, and age, taking voice as input.”

“I’m sure this isn’t the first time it’s been done, but it’s pretty neat to see it in action, and it’s a worthwhile reminder: if a neural net is this good at inferring social, racial, and gender information from audio, humans are even better. And the idea of speech as a social construct becomes even more relevant”, he further added.

This recent study is interesting considering that it takes AI to another level, predicting a face from audio recordings alone, without any need for DNA. However, there can be repercussions, especially when it comes to security: one could misuse such technology by impersonating someone else and cause trouble. It will be interesting to see how this study develops in the near future.

To know more about the Speech2Face model in detail, head over to the research paper.

Read More

OpenAI introduces MuseNet: A deep neural network for generating musical compositions
An unsupervised deep neural network cracks 250 million protein sequences to reveal biological structures and functions
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence

Vincy Davis
27 May 2019
6 min read

‘Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse’ say experts to House Oversight and Reform Committee

Last week, the US House Oversight and Reform Committee held its first hearing examining the use of facial recognition technology. The hearing was an incisive discussion of the use of facial recognition by government and commercial entities, flaws in the technology, the lack of regulation, and its impact on citizens’ civil rights and liberties. The chairman of the committee, U.S. Representative Elijah Cummings of Maryland, said that one of the goals of the hearing was to “protect against its abuse”.

At the hearing, Joy Buolamwini, founder of the Algorithmic Justice League, highlighted ‘misidentification’ as one of the major failure modes of this technology, since it can lead to false arrests and accusations, a risk especially for marginalized communities. In one of her studies at MIT on facial recognition systems, she found that for the task of guessing the gender of a face, IBM, Microsoft, and Amazon had error rates that rose to over 30% for darker-skinned women. On evaluating benchmark datasets from organizations like NIST (National Institute of Standards and Technology), a striking imbalance was found: one dataset contained 75 percent male and 80 percent lighter-skinned faces, which she called “pale male datasets”. She added that our faces may well be the final frontier of privacy, and that Congress must act now to uphold American freedoms and rights.

Andrew G. Ferguson, Professor of Law at the University of the District of Columbia, agreed with Buolamwini, stating that “Congress must act now” to prohibit facial recognition until Congress establishes clear rules. “The fourth amendment won’t save us. The Supreme Court is trying to make amendments but it’s not fast enough. Only legislation can react in real time to real time threats.”

Another strong concern raised at the hearing was the use of facial recognition technology in law enforcement.
Neema Singh Guliani, Senior Legislative Counsel of the American Civil Liberties Union, said law enforcement agencies across the country, including the FBI, are using face recognition in an unrestrained manner. This growing use of facial recognition is happening “without legislative approval, absent safeguards, and in most cases, in secret.” She also added that the U.S. reportedly has over 50 million surveillance cameras, which, combined with face recognition, threatens to create a near-constant surveillance state.

An important proposal for regulating facial recognition technology was to bring all kinds of biometric surveillance under the ambit of surveillance technology, including voice recognition and gait recognition, which is also being used actively by private companies like Tesla. Such regulation should include not only legislation but also real enforcement, so that “when your data is misused you have actually an opportunity, to go to court and get some accountability”, Guliani added. She also urged the committee to investigate how the FBI and other federal agencies are using this technology, whose accuracy has not been tested, and how the agency is complying with the Constitution while “reportedly piloting Amazon's face recognition product”.

Like the FBI and other government agencies, companies like Amazon and Facebook were heavily criticized by members of the committee for misusing the technology while looking for ways to develop and market facial recognition. On the same day as the hearing came the news that Amazon shareholders had rejected a proposal to ban the sale of its facial recognition tech to governments. In January this year, activist shareholders had proposed a resolution to limit the sale of Amazon’s facial recognition tech, called Rekognition, to law enforcement and government agencies. The technology is regarded as an enabler of racial discrimination against minorities, as it was found to be biased and inaccurate.
Jim Jordan, U.S. Representative of Ohio, raised a concern as to how “Some unelected person at the FBI talks to some unelected person at the state level, and they say go ahead,” without giving “any notification to individuals or elected representatives that their images will be used by the FBI.”

Using face recognition in such a casual manner poses “a unique threat to our civil rights and liberties”, noted Clare Garvie, Senior Associate at the Georgetown University Law Center’s Center on Privacy & Technology. Studies continue to show that the accuracy of face recognition varies with the race of the person being searched. The technology “makes mistakes and risks making more mistakes and more misidentifications of African Americans”. She asserted that “face recognition is too powerful, too pervasive, too susceptible to abuse, to continue being unchecked.”

There was general agreement among the members that federal legislation is necessary in order to prevent a confusing and potentially contradictory patchwork of regulation of government use of facial recognition technology. Another point of discussion was how well facial recognition could work if properly implemented in the real world: it could help the surveillance and healthcare sectors in a huge way if its challenges are addressed correctly. Dr. Cedric Alexander, former President of the National Organization of Black Law Enforcement Executives, was more cautious about banning the technology, and was of the opinion that it can be used effectively by police if officers are trained properly.

Last week, San Francisco became the first U.S. city to pass an ordinance barring police and other government agencies from using facial recognition technology. The decision has attracted attention across the country and could be followed by other local governments.
Committee member Gomez made the committee’s stand clear: they “are not anti-technology or anti-innovation, but we have to be very aware that we're not stumbling into the future blind.” Cummings concluded the hearing by thanking the witnesses, stating, “I've been here for now 23 years, it's one of the best hearings I've seen really. You all were very thorough and very very detailed, without objection.” The second hearing is scheduled for June 4th and will have law enforcement witnesses.

For more details, head over to the full Hearing on Facial Recognition Technology by the House Oversight and Reform Committee.

Read More

Over 30 AI experts join shareholders in calling on Amazon to stop selling Rekognition, its facial recognition tech, for government surveillance
Oakland Privacy Advisory Commission lay out privacy principles for Oaklanders and propose ban on facial recognition
Is China’s facial recognition powered airport kiosks an attempt to invade privacy via an easy flight experience?

Fatema Patrawala
24 May 2019
6 min read

PostgreSQL 12 Beta 1 released

The PostgreSQL Global Development Group announced yesterday its first beta release of PostgreSQL 12, which is now available for download. This release contains previews of all features that will be available in the final release of PostgreSQL 12, though some details of the release could still change.

PostgreSQL 12 feature highlights

Indexing Performance, Functionality, and Management

PostgreSQL 12 improves the overall performance of the standard B-tree indexes, with improvements to the space management of these indexes as well. These improvements provide a reduction of index size for B-tree indexes that are frequently modified, in addition to a performance gain.

Additionally, PostgreSQL 12 adds the ability to rebuild indexes concurrently, which lets you perform a REINDEX operation without blocking any writes to the index. This feature should help with lengthy index rebuilds that could otherwise cause downtime when managing a PostgreSQL database in a production environment.

PostgreSQL 12 extends the abilities of several of the specialized indexing mechanisms. The ability to create covering indexes, i.e. the INCLUDE clause that was introduced in PostgreSQL 11, has now been added to GiST indexes. SP-GiST indexes now support K-nearest neighbor (K-NN) queries for data types that support the distance (<->) operation. The amount of write-ahead log (WAL) overhead generated when creating a GiST, GIN, or SP-GiST index is also significantly reduced in PostgreSQL 12, which benefits the disk utilization of a PostgreSQL cluster and features such as continuous archiving and streaming replication.
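The indexing features above can be exercised with a few statements. This is a minimal sketch assuming a PostgreSQL 12 server; the table, column, and index names are illustrative, not taken from the announcement.

```sql
-- Rebuild an index without blocking writes (new in PostgreSQL 12):
REINDEX INDEX CONCURRENTLY orders_customer_id_idx;

-- Covering (INCLUDE) index, now supported for GiST:
CREATE INDEX deliveries_area_idx ON deliveries
    USING gist (service_area)
    INCLUDE (courier_id);

-- K-nearest-neighbor search on an SP-GiST point index
-- via the distance operator:
SELECT name
FROM landmarks
ORDER BY location <-> point '(0, 0)'
LIMIT 5;
```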
Inlined WITH queries (Common table expressions)

Common table expressions (or WITH queries) can now be automatically inlined in a query if they: a) are not recursive, b) do not have any side effects, and c) are only referenced once in a later part of the query. This removes an "optimization fence" that has existed since the introduction of the WITH clause in PostgreSQL 8.4.

Partitioning

PostgreSQL 12 improves the performance of processing tables with thousands of partitions for operations that only need to use a small number of the partitions. This release also improves the performance of both INSERT and COPY into a partitioned table. ATTACH PARTITION can now be performed without blocking concurrent queries on the partitioned table. Additionally, foreign keys can now reference partitioned tables in PostgreSQL 12.

JSON path queries per SQL/JSON specification

PostgreSQL 12 now allows execution of JSON path queries per the SQL/JSON specification in the SQL:2016 standard. Similar to XPath expressions for XML, JSON path expressions let you evaluate a variety of arithmetic expressions and functions in addition to comparing values within JSON documents. A subset of these expressions can be accelerated with GIN indexes, allowing the execution of highly performant lookups across sets of JSON data.

Collations

PostgreSQL 12 now supports case-insensitive and accent-insensitive comparisons for ICU-provided collations, also known as "nondeterministic collations". When used, these collations can provide convenience for comparisons and sorts, but can also incur a performance penalty, as a collation may need to make additional checks on a string.

Most-common Value Extended Statistics

CREATE STATISTICS, introduced in PostgreSQL 10 to help collect more complex statistics over multiple columns to improve query planning, now supports most-common-value statistics. This leads to improved query plans for distributions that are non-uniform.
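A minimal SQL sketch of the inlined WITH queries and JSON path features described above (table and column names are illustrative):

```sql
-- This single-reference, non-recursive CTE is now inlined automatically;
-- write WITH recent AS MATERIALIZED (...) to restore the old fence behavior.
WITH recent AS (
    SELECT * FROM events WHERE created_at > now() - interval '1 day'
)
SELECT count(*) FROM recent;

-- SQL/JSON path query: items costing more than 100 in a jsonb column.
SELECT jsonb_path_query(details, '$.items[*] ? (@.price > 100)')
FROM orders;
```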
Generated Columns

PostgreSQL 12 allows the creation of generated columns that compute their values with an expression using the contents of other columns. This feature provides stored generated columns, which are computed on inserts and updates and are saved on disk. Virtual generated columns, which would be computed only when a column is read as part of a query, are not implemented yet.

Pluggable Table Storage Interface

PostgreSQL 12 introduces the pluggable table storage interface, which allows for the creation and use of different methods for table storage. New access methods can be added to a PostgreSQL cluster using the CREATE ACCESS METHOD command and subsequently applied to tables with the new USING clause on CREATE TABLE. A table storage interface can be defined by creating a new table access method. In PostgreSQL 12, the default storage interface is the heap access method, which is currently the only built-in method.

Page Checksums

The pg_verify_checksums command has been renamed to pg_checksums and now supports the ability to enable and disable page checksums across a PostgreSQL cluster that is offline. Previously, page checksums could only be enabled during the initialization of a cluster with initdb.

Authentication & Connection Security

GSSAPI now supports client-side and server-side encryption and can be specified in the pg_hba.conf file using the hostgssenc and hostnogssenc record types. PostgreSQL 12 also allows for discovery of LDAP servers based on DNS SRV records if PostgreSQL was compiled with OpenLDAP.

Few noted behavior changes in PostgreSQL 12

There are several changes introduced in PostgreSQL 12 that can affect the behavior as well as the management of your ongoing operations. A few of these are noted below; for other changes, visit the "Migrating to Version 12" section of the release notes. The recovery.conf configuration file is now merged into the main postgresql.conf file.
PostgreSQL will not start if it detects that recovery.conf is present. To put PostgreSQL into a non-primary mode, you can use the recovery.signal and standby.signal files. You can read more about archive recovery here: https://www.postgresql.org/docs/devel/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY

Just-in-Time (JIT) compilation is now enabled by default.

OIDs can no longer be added to user-created tables using the WITH OIDs clause. Operations on tables that have columns that were created using WITH OIDS (i.e. columns named "OID") will need to be adjusted. Running a SELECT * command on a system table will now also output the OID for the rows in the system table, instead of the old behavior, which required the OID column to be specified explicitly.

Testing for Bugs & Compatibility

The stability of each PostgreSQL release greatly depends on the community testing the upcoming version with their workloads and testing tools, in order to find bugs and regressions before the general availability of PostgreSQL 12. As this is a beta, minor changes to database behaviors, feature details, and APIs are still possible. The PostgreSQL team encourages the community to test the new features of PostgreSQL 12 in their database systems to help eliminate any bugs or other issues that may exist. A list of open issues is publicly available in the PostgreSQL wiki, and you can report bugs using the form on the PostgreSQL website.

Beta Schedule

This is the first beta release of version 12. The PostgreSQL Project will release additional betas as required for testing, followed by one or more release candidates, until the final release in late 2019. For further information please see the Beta Testing page. Many other new features and improvements have been added to PostgreSQL 12. Please see the Release Notes for a complete list of new and changed features.
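As a hedged sketch of the recovery.conf behavior change described above, using a throwaway directory in place of a real data directory:

```shell
# PostgreSQL 12 drops recovery.conf; the server refuses to start if it exists.
# A scratch directory stands in for PGDATA here -- adjust for a real cluster.
PGDATA=$(mktemp -d)

# Settings that lived in recovery.conf (e.g. primary_conninfo) now go
# into postgresql.conf:
echo "primary_conninfo = 'host=primary port=5432'" >> "$PGDATA/postgresql.conf"

# Standby (non-primary) mode is now requested with an empty signal file:
touch "$PGDATA/standby.signal"

ls "$PGDATA"
```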
PostgreSQL 12 progress update

Building a scalable PostgreSQL solution

PostgreSQL security: a quick look at authentication best practices [Tutorial]
ICLR 2019 Highlights: Algorithmic fairness, AI for social good, climate change, protein structures, GAN magic, adversarial ML and much more

Amrata Joshi
09 May 2019
7 min read
The ongoing ICLR 2019 (International Conference on Learning Representations) has brought a pack full of surprises and key specimens of innovation. The conference started on Monday this week, and today is already its last day! This article covers the highlights of ICLR 2019 and introduces you to the ongoing research carried out by experts in the fields of deep learning, data science, computational biology, machine vision, speech recognition, text understanding, robotics and much more.

The team behind ICLR 2019 invited papers based on unsupervised objectives for agents, curiosity and intrinsic motivation, few-shot reinforcement learning, model-based planning and exploration, representation learning for planning, learning unsupervised goal spaces, unsupervised skill discovery and evaluation of unsupervised agents.

https://twitter.com/alfcnz/status/1125399067490684928

ICLR 2019, sponsored by Google, marks the presence of 200 researchers contributing to and learning from the academic research community by presenting papers and posters.

ICLR 2019 Day 1 highlights: Neural networks, algorithmic fairness, AI for social good and much more

Algorithmic fairness

https://twitter.com/HanieSedghi/status/1125401294880083968

The first day of the conference started with a talk on Highlights of Recent Developments in Algorithmic Fairness by Cynthia Dwork, an American computer scientist at Harvard University. She focused on "group fairness" notions that address the relative treatment of different demographic groups, and talked about research in the ML community that explores fairness via representations. The investigation of scoring, classifying, ranking, and auditing fairness was also discussed in her talk.
Generating high fidelity images with Subscale Pixel Networks and Multidimensional Upscaling

https://twitter.com/NalKalchbrenner/status/1125455415553208321

Jacob Menick, a senior research engineer at Google DeepMind, and Nal Kalchbrenner, staff research scientist and co-creator of the Google Brain Amsterdam research lab, talked on Generating high fidelity images with Subscale Pixel Networks and Multidimensional Upscaling. They discussed the challenges involved in generating images and how they address them with the Subscale Pixel Network (SPN), a conditional decoder architecture that generates an image as a sequence of image slices of equal size. They also explained how Multidimensional Upscaling is used to grow an image in both size and depth via intermediate stages corresponding to distinct SPNs.

In all, 10 workshops on AI and deep learning were conducted on the same day, covering topics such as:

The 2nd Learning from Limited Labeled Data (LLD) Workshop: Representation Learning for Weak Supervision and Beyond

Deep Reinforcement Learning Meets Structured Prediction

AI for Social Good

Debugging Machine Learning Models

The first day also witnessed a few interesting talks on neural networks, covering topics such as The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks and How Powerful are Graph Neural Networks? Overall, the first day was quite enriching and informative.

ICLR 2019 Day 2 highlights: AI in climate change, protein structure, adversarial machine learning, CNN models and much more

AI's role in climate change

https://twitter.com/natanielruizg/status/1125763990158807040

Tuesday, the second day of the conference, started with an interesting talk on Can Machine Learning Help to Conduct a Planetary Healthcheck? by Emily Shuckburgh, a climate scientist and deputy head of the Polar Oceans team at the British Antarctic Survey.
She talked about the sophisticated numerical models of the Earth's systems which have been developed so far based on physics, chemistry and biology. She then highlighted a set of "grand challenge" problems and discussed various ways in which machine learning is helping to advance our capacity to address these.

Protein structure with a differentiable simulator

On the second day of ICLR 2019, computational biologist Chris Sander, along with John Ingraham, Adam J Riesselman, and Debora Marks from Harvard University, talked on Learning protein structure with a differentiable simulator. They talked about the protein folding problem and their aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data. They also composed a neural energy function with a novel and efficient simulator based on Langevin dynamics to build an end-to-end-differentiable model of atomic protein structure given amino acid sequence information, discussed techniques for stabilizing backpropagation, and demonstrated the model's capacity to make multimodal predictions.

Adversarial Machine Learning

https://twitter.com/natanielruizg/status/1125859734744117249

Day 2 was long and had Ian Goodfellow, a machine learning researcher and the inventor of GANs, talk on adversarial machine learning. He talked about how supervised learning works, making machine learning private, getting machine learning to work for new tasks, and reducing the dependency on large amounts of labeled data. He then discussed how adversarial techniques in machine learning are involved in the latest research frontiers.

Day 2 also covered poster presentations and a few talks, including Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset and Learning to Remember More with Less Memorization.
ICLR 2019 Day 3 highlights: GANs, autonomous learning and much more

Developmental autonomous learning: AI, Cognitive Sciences and Educational Technology

https://twitter.com/drew_jaegle/status/1125522499150721025

Day 3 of ICLR 2019 started with a talk by Pierre-Yves Oudeyer, research director at Inria, on Developmental Autonomous Learning: AI, Cognitive Sciences and Educational Technology. He presented a research program that focuses on computational modeling of child development and learning mechanisms. He then discussed the several developmental forces that guide exploration in large real-world spaces, and talked about models of curiosity-driven autonomous learning that enable machines to sample and explore their own goals and learning strategies. He then explained how these models and techniques can be successfully applied in the domain of educational technologies.

Generating knockoffs for feature selection using Generative Adversarial Networks (GAN)

Another interesting topic on the third day of ICLR 2019 was Generating knockoffs for feature selection using Generative Adversarial Networks (GAN) by James Jordon from Oxford University, Jinsung Yoon from the University of California, and Mihaela Schaar, a professor at UCLA. The experts talked about a Generative Adversarial Networks framework that helps in generating knockoffs with no assumptions on the feature distribution. They also presented their model, which consists of four networks — wait, rather: a generator, a discriminator, a stability network and a power network — and demonstrated its capability to perform feature selection. A few more interesting talks, such as Deterministic Variational Inference for Robust Bayesian Neural Networks, were followed by a series of poster presentations.

ICLR 2019 Day 4 highlights: Neural networks, RNNs, neuro-symbolic concepts and much more

Learning natural language interfaces with neural models

Today's focus was more on neural models and neuro-symbolic concepts.
The day started with a talk on Learning natural language interfaces with neural models by Mirella Lapata, a computer scientist. She gave an overview of recent progress on learning natural language interfaces, which allow users to interact with various devices and services using everyday language. She also addressed the structured prediction problem of mapping natural language utterances onto machine-interpretable representations, outlined the various challenges it poses, and described a general modeling framework based on neural networks which tackles these challenges.

Ordered neurons: Integrating tree structures into Recurrent Neural Networks

https://twitter.com/mmjb86/status/1126272417444311041

The next interesting talk was on Ordered neurons: Integrating tree structures into Recurrent Neural Networks by Yikang Shen, Aaron Courville and Shawn Tan from Montreal University, and Alessandro Sordoni, a researcher at Microsoft. In this talk, the experts focused on their proposed new RNN unit, ON-LSTM, which achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.

The last day of ICLR 2019 was exciting, letting researchers present their innovations and giving attendees a chance to interact with the experts. To get a complete overview of each of these sessions, you can head over to ICLR's Facebook page.

Paper in Two minutes: A novel method for resource efficient image classification

Google I/O 2019 D1 highlights: smarter display, search feature with AR capabilities, Android Q, linguistically advanced Google lens and more

Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop
Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

Sugandha Lahoti
07 May 2019
10 min read
At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge. In his opening keynote, Microsoft CEO Satya Nadella outlined the company's vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming.

"As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in," said Satya Nadella, CEO, Microsoft. "Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone."

https://youtu.be/rIJRFHDr1QE

Increasing developer productivity in the Microsoft 365 platform

Microsoft Graph data connect

The Microsoft Graph is now powered with data connectivity, a service that combines analytics data from the Microsoft Graph with customers' business data. Microsoft Graph data connect will provide Office 365 data and Microsoft Azure resources to users via a toolset. The migration pipelines are deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs. More information here.

Microsoft Search

Microsoft Search works as a unified search experience across all Microsoft apps: Office, Outlook, SharePoint, OneDrive, Bing and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalize searches.
Other features included in Microsoft Search are:

Search box displacement

Zero query typing and a key-phrase suggestion feature

A query history feature, and personal search query history

Administrator access to the history of popular searches for their organizations, but not to search history for individual users

Files/people/site/bookmark suggestions

Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on MS Search here.

Fluid Framework

As the name suggests, Microsoft's newly launched Fluid Framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software developer kit and the first experiences powered by the Fluid Framework later this year in Microsoft Word, Teams, and Outlook. Read more about the Fluid Framework here.

Microsoft Edge new features

Microsoft Build 2019 paved the way for a bundle of new features for Microsoft's flagship web browser, Microsoft Edge. New features include:

Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab, allowing businesses to run legacy Internet Explorer-based apps in a modern browser.

Privacy Tools: Additional privacy controls which allow customers to choose from three levels of privacy in Microsoft Edge — Unrestricted, Balanced, and Strict — which limit how third parties can track users across the web. "Unrestricted" allows all third-party trackers to work in the browser. "Balanced" blocks third-party trackers from sites the user has not visited before.
And "Strict" blocks all third-party trackers.

Collections: Collections allows users to collect, organize, share and export content more efficiently, with Office integration.

Microsoft is also migrating Edge as a whole over to Chromium, which will make Edge easier for third parties to develop for. For more details, visit Microsoft's developer blog.

New toolkit enhancements in the Microsoft 365 platform

Windows Terminal

Windows Terminal is Microsoft's new application for Windows command-line users. Top features include:

A user interface with emoji-rich fonts and graphics-processing-unit-accelerated text rendering

Multiple tab support, plus theming and customization features

A powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL) and all forms of command-line applications

Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here.

React Native for Windows

Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using the "React Native for Windows" implementation. React Native for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices and more. The project is being developed on GitHub and is available for developers to test, with more mature releases to follow soon.

Windows Subsystem for Linux 2

Microsoft rolled out a new architecture for the Windows Subsystem for Linux, WSL 2, at MSBuild 2019. Microsoft will also be shipping a fully open-source Linux kernel with Windows, specially tuned for WSL 2. New features include massive file-system performance increases (roughly twice the speed for file-system-heavy operations, such as a Node Package Manager install). WSL 2 also supports running Linux Docker containers.
The next generation of WSL arrives for Insiders in mid-June. More information here.

New releases in multiple developer tools

.NET 5 arrives in 2020

.NET 5 is the next major version of the .NET platform, which will be available in 2020. .NET 5 will have all .NET Core features as well as more additions:

One Base Class Library containing APIs for building any type of application

More choice on runtime experiences

Java interoperability available on all platforms

Objective-C and Swift interoperability supported on multiple operating systems

.NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios. .NET 5 will also offer one unified toolchain supported by new SDK project types as well as a flexible deployment model (side-by-side and self-contained EXEs). Detailed information here.

ML.NET 1.0

ML.NET is Microsoft's open-source and cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible for .NET developers. Its new version, ML.NET 1.0, was released at Microsoft Build 2019 yesterday. Some new features in this release are:

Automated Machine Learning Preview: Transforms input data by selecting the best-performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports regression and classification ML tasks.

ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers which uses AutoML to build ML models. It also generates model training and model consumption code for the best-performing model.

ML.NET CLI Preview: The ML.NET CLI is a dotnet tool which generates ML.NET models using AutoML and ML.NET. It quickly iterates through a dataset for a specific ML task and produces the best model.

Visual Studio IntelliCode, Microsoft's tool for AI-assisted coding

Visual Studio IntelliCode, Microsoft's AI-assisted coding tool, is now generally available.
It is essentially an enhanced IntelliSense, Microsoft's extremely popular code-completion tool. IntelliCode is trained using the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML in Visual Studio, and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. IntelliCode is also included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview.

Visual Studio 2019 version 16.1 Preview 2

The Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings the Time Travel Debugging feature introduced with version 16.0 out of preview, and includes multiple performance and productivity improvements for .NET and C++ developers.

Gaming and Mixed Reality

Minecraft AR game for mobile devices

At the end of Microsoft's Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft.

https://www.youtube.com/watch?v=UiX0dVXiGa8

HoloLens 2 Development Edition and Unreal Engine support

The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits and three-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May.

Intelligent Edge and IoT

Azure IoT Central new features

Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution:
Better rules processing and custom rules with services like Azure Functions or Azure Stream Analytics

Multiple dashboards and data visualization options for different types of users

Inbound and outbound data connectors, so that operators can integrate with other systems

Ability to add custom branding and operator resources to an IoT Central application with new white-labeling options

New Azure IoT Central features are available for customer trials.

IoT Plug and Play

IoT Plug and Play is a new, open modeling language to connect IoT devices to the cloud seamlessly without developers having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play enabled devices in Microsoft's Azure IoT Device Catalog. The first device partners include Compal, Kyocera, and STMicroelectronics, among others.

Azure Maps Mobility Service

Azure Maps Mobility Service is a new API which provides real-time public transit information, including nearby stops, routes and trip intelligence. This API will also provide transit services to help with city planning, logistics, and transportation. Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here.

KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat collaborated to create KEDA, an open-sourced project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment — any public/private cloud or on-premises, such as Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers to respond to events happening in other services or components. This allows the container to consume events directly from the source, instead of routing through HTTP.
KEDA also presents a new hosting option for Azure Functions that can be deployed as a container in Kubernetes clusters.

Securing elections and political campaigns

ElectionGuard SDK and Microsoft 365 for Campaigns

ElectionGuard is a free, open-source software development kit (SDK) released as an extension of Microsoft's Defending Democracy Program, to enable end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here.

Microsoft Build is in its 6th year and will continue till 8th May. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here!

Microsoft introduces Remote Development extensions to make remote development easier on VS Code

Docker announces a collaboration with Microsoft's .NET at DockerCon 2019

How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]
JupyterHub 1.0 releases with named servers, support for TLS encryption and more

Sugandha Lahoti
06 May 2019
4 min read
JupyterHub 1.0 was released last week as the first major update since 2015. JupyterHub allows multiple users to use Jupyter notebooks. JupyterHub 1.0 comes with UI support for managing named servers, TLS encryption and authentication support, and more.

What's new in JupyterHub 1.0?

UI for named servers

JupyterHub 1.0 comes with full UI support for managing named servers. Named servers allow each JupyterHub user to have access to more than one named server, and JupyterHub 1.0 introduces a new UI for managing these servers. Users can now create/start/stop/delete their servers from the hub home page.

Source: Jupyter blog

TLS encryption and authentication

JupyterHub 1.0 supports TLS encryption and authentication of all internal communication. Spawners must implement the .move_certs method to make certificates available to the notebook server if it is not local to the Hub. Currently, local spawners and DockerSpawner support internal SSL.

Checking and refreshing authentication

JupyterHub 1.0 introduces three new configuration options to refresh or expire authentication information:

c.Authenticator.auth_refresh_age allows authentication to expire after a number of seconds.

c.Authenticator.refresh_pre_spawn forces a refresh of authentication prior to spawning a server, effectively requiring a user to have up-to-date authentication when they start their server.

Authenticator.refresh_auth defines what it means to refresh authentication, and can be customized by Authenticator implementations.

Other changes

A new API is added in JupyterHub 1.0 for registering user activity. Activity is now tracked by pushing it to the Hub from user servers instead of polling the proxy API. Dynamic options_form callables may now return an empty string, which results in no options form being rendered. Spawner.user_options is persisted to the database to be re-used, so that a server spawned once via the form can be re-spawned via the API with the same options.
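A hypothetical jupyterhub_config.py fragment sketching the authentication-refresh and internal-TLS options described above (option names as listed in the release; the values are illustrative only, not recommendations):

```python
# jupyterhub_config.py -- illustrative values only
c = get_config()  # provided by JupyterHub when it loads this file

# Encrypt and authenticate all internal Hub <-> server traffic:
c.JupyterHub.internal_ssl = True

# Expire cached authentication info after one hour:
c.Authenticator.auth_refresh_age = 3600

# Force an authentication refresh right before a server is spawned:
c.Authenticator.refresh_pre_spawn = True
```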
The c.PAMAuthenticator.pam_normalize_username option is added for round-tripping usernames through PAM to retrieve the normalized form. The c.JupyterHub.named_server_limit_per_user configuration is added to limit the number of named servers each user can have; the default is 0, for no limit. API requests to HubAuthenticated services (e.g. single-user servers) may pass a token in the Authorization header, matching authentication with the Hub API itself. An Authenticator.is_admin(handler, authentication) method and Authenticator.admin_groups configuration are added for automatically determining that a member of a group should be considered an admin.

These are just a select few updates. For the full list of new features and improvements in JupyterHub 1.0, visit the changelog. You can upgrade JupyterHub with conda or pip:

conda install -c conda-forge jupyterhub==1.0.*

pip install --upgrade jupyterhub==1.0.*

Users were quite excited about the release. Here are some comments from a Hacker News thread:

"This is really cool and I'm impressed by the jupyter team. My favorite part is that it's such a good product that beats the commercial products because it's hard to figure out, I think, commercial models that support this wide range of collaborators (people who view once a month to people who author every day)."

"Congratulations! JupyterHub is a great project with high-quality code and docs. Looking forward to trying the named servers feature as I run a JupyterHub instance that spawns servers inside containers based on a single image which inevitably tends to grow as I add libraries. Being able to manage multiple servers should allow me to split the image into smaller specialized images."

Introducing Jupytext: Jupyter notebooks as Markdown documents, Julia, Python or R scripts

How everyone at Netflix uses Jupyter notebooks, from data scientists and machine learning engineers to data analysts

10 reasons why data scientists love Jupyter notebooks

DataCamp reckons with its #MeToo movement; CEO steps down from his role indefinitely

Fatema Patrawala
25 Apr 2019
7 min read
The data science community is reeling after data science learning startup DataCamp penned a blog post acknowledging that an unnamed company executive made "uninvited physical contact" with one of its employees. DataCamp, which operates an e-learning platform where aspiring data scientists can take courses in coding and data analysis, is a startup valued at $184 million and has raised over $30 million in funding. The company disclosed in a blog post published on 4th April that the incident occurred at an "informal employee gathering" at a bar in October 2017. The unnamed DataCamp executive had "danced inappropriately and made uninvited physical contact" with the employee on the dance floor, the post read. The company didn't name the executive involved, but called his behavior on the dance floor "entirely inappropriate" and "inconsistent" with employee expectations and policies. When Business Insider reached out to OS Keyes, a course instructor familiar with the matter, Keyes said that the executive in question is DataCamp's co-founder and CEO Jonathan Cornelissen. Yesterday, Motherboard also reported that the company did not adequately address sexual misconduct by a senior executive, and instructors at DataCamp have begun boycotting the service and asking the company to delete their courses following the allegations.

What actually happened and how did DataCamp respond?
On April 4, DataCamp shared a statement on its blog titled “a note to our community.” In it, the startup addressed the accusations against one of the company’s executives: “In October 2017, at an informal employee gathering at a bar after a week-long company offsite, one of DataCamp’s executives danced inappropriately and made uninvited physical contact with another employee while on the dance floor.” DataCamp had the complaint reviewed by a “third party not involved in DataCamp’s day-to-day business,” and said it took several “corrective actions,” including “extensive sensitivity training, personal coaching, and a strong warning that the company will not tolerate any such behavior in the future.”

DataCamp only posted its blog a day after more than 100 DataCamp instructors signed a letter and sent it to DataCamp executives. “We are unable to cooperate with continued silence and lack of transparency on this issue,” the letter said. “The situation has not been acknowledged adequately to the data science community, leading to harmful rumors and uncertainty.” But as instructors read DataCamp's statement following the letter, many found the actions taken to be insufficient.

https://twitter.com/hugobowne/status/1120733436346605568
https://twitter.com/NickSolomon10/status/1120837738004140038

Motherboard reported the case in detail, drawing on an account from Julia Silge, a data scientist who co-authored the letter to DataCamp. Silge said that going public with their demands for accountability was the last resort. Speaking about the incident, she remembered seeing the victim of the assault start working at DataCamp and then leave abruptly. This raised “red flags,” but she did not reach out to her. Silge then heard about the incident from a mutual friend and began to raise the issue internally at DataCamp. “There were various responses from the rank and file. It seemed like after a few months of that there was not a lot of change, so I escalated a little bit,” she said.

DataCamp finally responded to Silge by saying “I think you have misconceptions about what happened,” and also mentioned that “there was alcohol involved” to explain the executive's behavior. DataCamp further said, “We also heard over and over again, ‘This has been thoroughly handled.’” But Silge and other instructors who have spoken out say that DataCamp hasn’t properly handled the situation and has tried to sweep it under the rug. Silge also created a private Slack group to coordinate efforts to confront the issue. She and the group got into a video conference with DataCamp, which was put into “listen-only” mode for all participants except DataCamp, meaning they could not speak in the meeting and were effectively silenced. “It felt like 30 minutes of the DataCamp leadership saying what they wanted to say to us,” Silge said. “The content of it was largely them saying how much they valued diversity and inclusion, which is hard to find credible given the particular ways DataCamp has acted over the past.” Following that meeting, instructors began to boycott DataCamp more openly, with one instructor refusing to make necessary upgrades to her course until DataCamp addressed the situation. Silge and two other instructors eventually drafted and sent the letter, at first to the small group involved in accountability efforts, then to almost every DataCamp instructor. All told, the letter received more than 100 signatures (of about 200 total instructors). A DataCamp spokesperson said in response: “When we became aware of this matter, we conducted a thorough investigation and took actions we believe were necessary and appropriate. However, recent inquiries have made us aware of mischaracterizations of what occurred and we felt it necessary to make a public statement.
As a matter of policy, we do not disclose details on matters like this, to protect the privacy of the individuals involved.” “We do not retaliate against employees, contractors or instructors or other members of our community, under any circumstances, for reporting concerns about behavior or conduct,” the company added.

DataCamp's response was not only inadequate but also technically evasive: as contractor Noam Ross pointed out in his blog post, DataCamp had published the blog with a “noindex” tag, meaning it would not show up in search results such as Google's. Knowingly adding this tag, Ross argued, represents DataCamp's continued lack of public accountability. OS Keyes told Business Insider that at this point, the best course of action for DataCamp is a change in leadership. “The investors need to get together and fire the [executive], and follow that by publicly explaining why, apologising, compensating the victim and instituting a much more rigorous set of work expectations,” Keyes said.

#Rstats and other data science communities and DataCamp instructors take action

One of the contractors, Ines Montani, said: “I was pretty disappointed, appalled and frustrated by DataCamp's reaction and non-action, especially as more and more details came out about how they essentially tried to sweep this under the rug for almost two years.” Due to their contracts, many instructors cannot take down their DataCamp courses. Instead, many contractors, including Montani, took to Twitter after DataCamp published the blog, urging students to boycott the very courses they designed.

https://twitter.com/noamross/status/1116667602741485571
https://twitter.com/daniellequinn88/status/1117860833499832321
https://twitter.com/_tetration_/status/1118987968293875714

By boycotting their own courses, instructors put financial pressure on the company. They also want the executive responsible held accountable for his actions, the victim compensated, and those who were fired for complaining compensated as well; the boycott may ultimately undercut DataCamp's bottom line. Influential open-source communities, including RStudio, SatRdays, and R-Ladies, have cut all ties with DataCamp to show their disappointment with the lack of serious accountability.

CEO steps down “indefinitely” from his role and accepts his mistakes

Today, Jonathan Cornelissen accepted his mistake and wrote a public apology for his inappropriate behaviour. He writes, “I want to apologize to a former employee, our employees, and our community. I have failed you twice. First in my behavior and second in my failure to speak clearly and unequivocally to you in a timely manner. I am sorry.” He has also stepped down from his position as company CEO indefinitely, until there is a complete review of the company’s environment and culture. While this is a step in the right direction, the apology comes very late and is seen by many in the data science community as a PR move to appease the backlash.

https://twitter.com/mrsnoms/status/1121235830381645824

9 Data Science Myths Debunked
30 common data science terms explained
Why is data science important?
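As an aside on the “noindex” tag mentioned earlier: it is a standard robots meta directive placed in a page's head. A generic example (not DataCamp's actual markup) looks like this:

```html
<head>
  <!-- Asks search engine crawlers not to list this page in results -->
  <meta name="robots" content="noindex">
</head>
```

Crawlers that honor the directive drop the page from their indexes, which is why a post published with it would not surface in Google searches.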

MongoDB is going to acquire Realm, the mobile database management system, for $39 million

Richard Gall
25 Apr 2019
3 min read
MongoDB, the open source NoSQL database, is going to acquire mobile database platform Realm. The purchase is certainly one with clear technological and strategic benefits for both companies, and with MongoDB paying $39 million for a company that has raised $40 million since its launch in 2011, it's clear that this is a move that isn't about short-term commercial gains. It's important to note that the acquisition is not yet complete: it's expected to close in January 2020, at the end of the second quarter of MongoDB's fiscal year. Further details about the acquisition, and what it means for both products, will be revealed at MongoDB World in June.

Why is MongoDB acquiring Realm?

In the materials announcing the acquisition there's a lot of talk about the alignment between the two projects. "The best thing in the world is when someone just gets you, and you get them," MongoDB CTO Eliot Horowitz wrote in a blog post accompanying the announcement, "because when you share a vision of the world like that, you can do incredible things together. That’s exactly the case with MongoDB and Realm." At a more fundamental level, the acquisition allows MongoDB to do a number of things. It can reach a new community of developers working primarily in mobile development (according to the press release, Realm has 100,000 active users), but it also allows MongoDB to strengthen its capabilities as cloud evolves to become the dominant way that applications are built and hosted. According to Dev Ittycheria, MongoDB President and CEO, Realm "is a natural fit for our global cloud database, MongoDB Atlas, as well as a complement to Stitch, our serverless platform." Serverless might well be a nascent trend at the moment, but the level of conversation and interest around it indicates that it's going to play a big part in application developers' lives in the months and years to come.

What's in it for Realm?

For Realm, the acquisition will give the project access to a new pool of users. With backing from MongoDB, it also provides robust foundations for the project to extend its roadmap and move faster than it previously could have. Realm CEO David Ratner wrote yesterday (April 24): "The combination of MongoDB and Realm will establish the modern standard for mobile application development and data synchronization for a new generation of connected applications and services. MongoDB and Realm are fully committed to investing in the Realm Database and the future of data synchronization, and taking both to the next phase of their evolution. We believe that MongoDB will help accelerate Realm’s product roadmap, go-to-market execution, and support our customers’ use cases at a whole new level of global operational scale."

A new chapter for MongoDB?

2019 hasn't been the best year for MongoDB so far. The project withdrew its submission for its controversial Server Side Public License last month, following news that Red Hat was dropping MongoDB from Enterprise Linux and Fedora. This brought an initiative that the leadership viewed as strategically important in defending MongoDB's interests to a dramatic halt. However, the Realm acquisition sets up a new chapter and could go some way in helping MongoDB bolster itself for a future it has felt uncertain about.

Tesla Autonomy Day takeaways: Full Self-Driving computer, Robotaxis launching next year, and more

Bhagyashree R
24 Apr 2019
6 min read
This Monday, Tesla’s “Autonomy Investor Day” kicked off at its headquarters in Palo Alto. At this invitation-only event, Elon Musk, the CEO of Tesla, and his fellow executives talked about the company's new microchip, robotaxis hitting the road by next year, and more. Here are some of the key takeaways from the event:

The Full Self-Driving (FSD) computer

Tesla shared details of its new custom chip, the Full Self-Driving (FSD) computer, previously known as Autopilot Hardware 3.0. Musk believes that the FSD computer is “the best chip in the world…objectively.” Tesla replaced Nvidia’s Autopilot 2.5 computer with its own custom chip for Model S and Model X about a month ago; for Model 3 vehicles, this change happened about 10 days ago. Musk said, “All cars being produced all have the hardware necessary — computer and otherwise — for full self-driving. All you need to do is improve the software.” The FSD is a high-performance, special-purpose chip built by Samsung with a main focus on autonomy and safety. It delivers a 21x improvement in frames-per-second processing compared to the previous-generation Tesla Autopilot hardware, which was powered by Nvidia. The company further shared that retrofits will be offered in the next few months to current Tesla owners who bought the ‘Full Self-Driving package’. Here’s the new Tesla FSD computer:

Credits: Tesla

Musk shared that the company has already started working on a next-generation chip. The design of the FSD was completed within two years, and Tesla is now about halfway through the design of its successor. Musk’s claim of having built the best chip can be taken with a pinch of salt, as it could surely upset engineers at Nvidia, Mobileye, and other companies that have been in the chip-making market for a long time.
Nvidia, in a blog post, while applauding Tesla for its FSD computer, highlighted a “few inaccuracies” in the comparison Musk made during the event: “It’s not useful to compare the performance of Tesla’s two-chip Full Self Driving computer against NVIDIA’s single-chip driver assistance system. Tesla’s two-chip FSD computer at 144 TOPs would compare against the NVIDIA DRIVE AGX Pegasus computer which runs at 320 TOPS for AI perception, localization and path planning.” While pointing out the “inaccuracies”, Nvidia missed a key point: power consumption. “Having a system that can do 160 TOPS means little if it uses 500 watts while tesla's 144 TOPS system uses 72 watts,” a Redditor said.

Robotaxis will hit the roads in 2020

Musk shared that within the next year or so, Tesla’s robotaxis will enter the ride-hailing market, competing with Uber and Lyft. Musk made a bold claim: like other ride-hailing services, the robotaxis will let users hail a Tesla for a ride, but they will not have drivers. Musk announced, “I feel very confident predicting that there will be autonomous robotaxis from Tesla next year — not in all jurisdictions because we won’t have regulatory approval everywhere.” He did not share many details on what regulations he was talking about. The service will allow Tesla owners to add their properly equipped vehicles to Tesla’s own ride-sharing app, following a business model similar to Uber or Airbnb. The company will provide a dedicated number of robotaxis in areas where there are not enough loanable cars. Musk predicted that the average robotaxi will be able to yield $30,000 in gross profit per car annually. Of this, about 25% to 30% will go to Tesla, so an owner would be able to make about $21,000 a year. Musk's plan to launch robotaxis next year looks ambitious, and experts and the media are quite skeptical about it.
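The Redditor's efficiency point is easy to make concrete. A quick back-of-the-envelope calculation, using only the TOPS and wattage figures quoted above (vendor marketing numbers, not independent benchmarks):

```python
# Performance-per-watt from the figures quoted in the article:
#   Tesla FSD computer:       144 TOPS at  72 W
#   NVIDIA DRIVE AGX Pegasus: 320 TOPS at 500 W

def tops_per_watt(tops: float, watts: float) -> float:
    """Throughput efficiency in TOPS per watt."""
    return tops / watts

fsd = tops_per_watt(144, 72)       # 2.0 TOPS/W
pegasus = tops_per_watt(320, 500)  # 0.64 TOPS/W

print(f"FSD: {fsd:.2f} TOPS/W, Pegasus: {pegasus:.2f} TOPS/W")
```

On these numbers, the FSD computer does roughly three times the work per watt, which matters in a car whose computer draws from the traction battery.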
The Partners for Automated Vehicle Education (PAVE) industry group tweeted:

https://twitter.com/PAVECampaign/status/1120436981220237312
https://twitter.com/RobMcCargow/status/1120961462678245376

Musk says “Anyone relying on lidar is doomed”

Musk has been pretty vocal about his dislike of lidar, calling the technology “a crutch for self-driving cars”. When the topic came up at the event, Musk said: “Lidar is a fool’s errand. Anyone relying on lidar is doomed. Doomed! [They are] expensive sensors that are unnecessary. It’s like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it’s ridiculous, you’ll see.” Lidar, which stands for Light Detection and Ranging, is used by Uber, Waymo, Cruise, and many other companies building self-driving vehicles. Lidar projects low-intensity, harmless, and invisible laser beams at a target — or, in the case of self-driving cars, all around. The reflected pulses are then measured for return time and wavelength to calculate the distance of an object from the sender. Lidar is capable of producing very detailed visualizations of the environment around a self-driving car. Tesla, however, believes that the same functionality can be provided by cameras. According to Musk, cameras offer much better resolution and, when combined with a neural net, can predict depth very well. Andrej Karpathy, Tesla’s Senior Director of AI, took to the stage to explain the limitations of lidar. He said, “In that sense, lidar is really a shortcut. It sidesteps the fundamental problems, the important problem of visual recognition, that is necessary for autonomy. It gives a false sense of progress and is ultimately a crutch. It does give, like, really fast demos!” Karpathy further added, “You were not shooting lasers out of your eyes to get here.” While true in a literal sense, many felt the reasoning was flawed.
A Redditor in a discussion thread said, “Musk's argument that "you drove here using your own two eyes with no lasers coming out of them" is reductive and flawed. It should be obvious to anyone that our eyes are more complex than simple stereo cameras. If the Tesla FSD system can reliably perceive depth at or above the level of the human eye in all conditions, then they have done something truly remarkable. Judging by how Andrej Karpathy deflected the question about how well the system works in snowy conditions, I would assume they have not reached that level.”

Check out the live stream of Autonomy Day on Tesla’s official website.

Tesla v9 to incorporate neural networks for autopilot
Tesla is building its own AI hardware for self-driving cars
Nvidia Tesla V100 GPUs publicly available in beta on Google Compute Engine and Kubernetes Engine

OpenAI Five bots destroyed human Dota 2 players this weekend

Richard Gall
23 Apr 2019
3 min read
Last week, the team at OpenAI made it possible for humans to play the OpenAI Five bot at Dota 2 online. The results were staggering: over a period of just a few days, from April 18 to April 21, OpenAI Five had a win rate of 99.4%, winning 7,215 games (a figure that includes humans giving up and abandoning their games 3,140 times) and losing only 42. But perhaps we shouldn't be that surprised. The artificial intelligence bot did, after all, defeat OG, one of the best e-sports teams on the planet, earlier this month.

https://twitter.com/OpenAI/status/1120421259274334209

What does OpenAI Five's Dota 2 dominance tell us about artificial intelligence?

The dominance of OpenAI Five over the weekend is important because it indicates that it is possible to build artificial intelligence that can handle complex strategic decision-making consistently. Indeed, that's what sets this experiment apart from other artificial intelligence gaming challenges: from the showdown with OG to DeepMind's AlphaZero defeating professional Go and chess players, bots have typically played individuals or small teams of players. By taking on the world, OpenAI appears to have developed an artificial intelligence system that a large group of intelligent humans with specific domain experience has found consistently difficult to out-think.

Learning how to win

The key issue when it comes to artificial intelligence and games, Dota 2 or otherwise, is the ability of the bot to learn. One Dota 2 gamer, quoted on a Reddit thread, said "the bots are locked, they are not learning, but we humans are. We will win." This is true only up to a point. In reality, the bots aren't locked: they are continually learning, processing the consequences of every decision made and feeding them back into the system.
And although adaptability will remain an issue for any artificial intelligence system, the more games the bot plays and the more strategies it 'learns', the more adaptability it builds into its system. This is something OpenAI CTO Greg Brockman noted when responding to suggestions that OpenAI Five's handful of defeats indicates a lack of adaptability: "When we lost at The International (100% vs pro teams), they said it was because Five can’t do strategy. So we trained for longer. When we lose (0.7% vs the entire Internet), they say it’s because Five can’t adapt."

https://twitter.com/gdb/status/1119963994754670594

It's important to remember that this doesn't necessarily signal that much about the possibility of Artificial General Intelligence. OpenAI Five's decision-making power is centered on a very specific, if relatively complex, domain. However, it does highlight that the relationship between video games and artificial intelligence is particularly important. On the one hand, video games are a space that can help us develop AI further and explore the boundaries of what's possible. But equally, AI will likely evolve the way we think about gaming, and esports, too.

Read next: How Artificial Intelligence and Machine Learning can turbocharge a Game Developer’s career
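As a sanity check on the figures quoted at the top of this piece, the 99.4% win rate follows directly from the win/loss counts, assuming it is computed over wins plus losses with abandoned games counted among the wins, as the article states:

```python
# Win rate from the counts quoted above: 7,215 wins (a figure that
# includes 3,140 abandonments) and 42 losses over April 18-21.

def win_rate_pct(wins: int, losses: int) -> float:
    """Percentage of games won out of all decided games."""
    return 100 * wins / (wins + losses)

print(f"{win_rate_pct(7215, 42):.1f}%")  # 99.4%
```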

AI Now Institute publishes a report on the diversity crisis in AI and offers 12 solutions to fix it

Bhagyashree R
22 Apr 2019
7 min read
Earlier this month, the AI Now Institute published a report, authored by Sarah Myers West, Meredith Whittaker, and Kate Crawford, highlighting the link between the diversity problem in the AI industry and the discriminatory behavior of AI systems. The report also recommends solutions that companies and the researchers behind these systems should adopt to address these issues. Sarah Myers West is a postdoc researcher at the AI Now Institute and an affiliate researcher at the Berkman Klein Center for Internet and Society. Meredith Whittaker is a co-founder of the AI Now Institute and leads Google's Open Research Group and the Google Measurement Lab. Kate Crawford is a Principal Researcher at Microsoft Research and the co-founder and Director of Research at the AI Now Institute. Crawford tweeted about the study:

https://twitter.com/katecrawford/status/1118509988392112128

The AI industry lacks diversity, gender neutrality, and bias-free systems

In recent years, we have come across several cases of “discriminating systems”. Facial recognition systems miscategorize black people and sometimes fail to work for trans drivers. When trained on online discourse, chatbots easily learn racist and misogynistic language. This behavior by machines is a reflection of society: “In most cases, such bias mirrors and replicates existing structures of inequality in the society,” says the report. The study also sheds light on gender bias in the current workforce. According to the report, only 18% of authors at some of the biggest AI conferences are women, while men make up 80%. The tech giants Facebook and Google have a meager 15% and 10% of women on their AI research staff, respectively. The situation for black workers in the AI industry looks even worse: while Facebook and Microsoft each have 4% black workers in their current workforce, Google stands at just 2.5%.
Also, the vast majority of AI studies assume gender is binary and commonly assign people as ‘male’ or ‘female’ based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity. The report further reveals that, though there have been various “pipeline studies” tracking the flow of diverse job candidates, they have failed to show substantial progress in bringing diversity to the AI industry. “The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether,” the report reads.

What steps can industries take to address bias and discrimination in AI systems?

The report lists 12 recommendations that AI researchers and companies should employ to improve workplace diversity and address bias and discrimination in AI systems:

1. Publish compensation levels, including bonuses and equity, across all roles and job categories, broken down by race and gender.
2. End pay and opportunity inequality, and set pay and benefit equity goals that include contract workers, temps, and vendors.
3. Publish harassment and discrimination transparency reports, including the number of claims over time, the types of claims submitted, and actions taken.
4. Change hiring practices to maximize diversity: include targeted recruitment beyond elite universities, ensure a more equitable focus on under-represented groups, and create more pathways for contractors, temps, and vendors to become full-time employees.
5. Commit to transparency around hiring practices, especially regarding how candidates are leveled, compensated, and promoted.
6. Increase the number of people of color, women, and other under-represented groups at senior leadership levels of AI companies across all departments.
7. Ensure executive incentive structures are tied to increases in hiring and retention of under-represented groups.
8. For academic workplaces, ensure greater diversity in all spaces where AI research is conducted, including AI-related departments and conference committees.
9. Remedying bias in AI systems is almost impossible when these systems are opaque. Transparency is essential, and begins with tracking and publicizing where AI systems are used, and for what purpose.
10. Rigorous testing should be required across the lifecycle of AI systems in sensitive domains. Pre-release trials, independent auditing, and ongoing monitoring are necessary to test for bias, discrimination, and other harms.
11. The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise.
12. The methods for addressing bias and discrimination in AI need to expand to include assessments of whether certain systems should be designed at all, based on a thorough risk assessment.

Credits: AI Now Institute

Bringing diversity to the AI workforce

In order to address the diversity issue in the AI industry, companies need to change their current hiring practices. They should place a more equitable focus on under-represented groups: people of color, women, and others should get a fair chance to reach the senior leadership levels of AI companies across all departments, and further opportunities should be created for contractors, temps, and vendors to become full-time employees. To bridge the gender pay gap in the AI industry, it is also important that companies maintain transparency regarding compensation levels, including bonuses and equity, regardless of gender or race.

In the past few years, several cases of sexual misconduct involving some of the biggest companies, like Google and Microsoft, have come to light because of movements like #MeToo, the Google Walkout, and more.
These movements gave the victims and other supportive employees the courage to speak up against employees in higher positions who were taking undue advantage of their power. There have been cases where sexual harassment complaints were not taken seriously by HR and victims were told to just “get over it”. This is why companies should publish harassment and discrimination transparency reports that include information like the number and types of claims made and the actions taken by the company.

Academic workplaces should ensure diversity in all AI-related departments and conference committees. In the past, some of the biggest AI conferences, such as the Neural Information Processing Systems conference, have failed to provide a welcoming and safe environment for women. In a survey conducted last year, many respondents shared that they had experienced sexual harassment, and women reported persistent advances from men at the conference. The organizers of such conferences should ensure an inclusive and welcoming environment for everyone.

Addressing bias and discrimination in AI systems

To address bias and discrimination in AI systems, the report recommends rigorous testing across the lifecycle of these systems. They should undergo pre-release trials, independent auditing, and ongoing monitoring to check for bias, discrimination, and other harms. Given the social implications of AI systems, just addressing algorithmic bias is not enough: “The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise,” says the report. When assessing an AI system, researchers and developers should also ask whether the system should be designed at all, considering the risks it poses. The study calls for re-evaluating the AI systems currently used for classifying, detecting, and predicting race and gender. The idea of identifying race or gender just by appearance is flawed and easily abused, especially in systems that use physical appearance to infer interior states, for instance those that claim to detect sexuality from headshots. Such systems urgently need to be re-examined.

To know more in detail, read the full report: Discriminating Systems.

Microsoft’s #MeToo reckoning: female employees speak out against workplace harassment and discrimination
Desmond U. Patton, Director of SAFElab shares why AI systems should be a product of interdisciplinary research and diverse teams
Google’s Chief Diversity Officer, Danielle Brown resigns to join HR tech firm Gusto

EU approves labour protection laws for ‘Whistleblowers’ and ‘Gig economy’ workers with implications for tech companies

The European Union approved two new labour protection laws recently. This time, for two not-so-hyped groups: whistleblowers and those earning their income via the ‘gig economy’. The new law gives whistleblowers increased protection under landmark legislation aimed at encouraging reports of wrongdoing. For those working ‘on-demand’ jobs, collectively termed the gig economy, the law sets minimum rights and demands increased transparency for such workers. Let’s have a brief look at each of the laws newly approved by the EU.

Whistleblowers’ shield against retaliation

On Tuesday, the EU parliament approved a new law for whistleblowers, safeguarding them from any retaliation within an organization. The law protects whistleblowers against dismissal, demotion and other forms of punishment. “The law now needs to be approved by EU ministers. Member states will then have two years to comply with the rules”, the EU proposal states. Transparency International calls this “pathbreaking legislation”, which will also give employees a "greater legal certainty around their rights and obligations". The new law creates a safe channel which allows whistleblowers to report a breach of EU law both within an organization and to public authorities. “It is the first time whistleblowers have been given EU-wide protection. The law was approved by 591 votes, with 29 votes against and 33 abstentions”, the BBC reports. In cases where no appropriate action is taken by the organization’s authorities even after reporting, whistleblowers are allowed to make a public disclosure of the wrongdoing by communicating with the media. European Commission Vice President Frans Timmermans says, “potential whistleblowers are often discouraged from reporting their concerns or suspicions for fear of retaliation.
We should protect whistleblowers from being punished, sacked, demoted or sued in court for doing the right thing for society.” He further added, “This will help tackle fraud, corruption, corporate tax avoidance and damage to people's health and the environment.” “The European Commission says just 10 members - France, Hungary, Ireland, Italy, Lithuania, Malta, the Netherlands, Slovakia, Sweden, and the UK - had a "comprehensive law" protecting whistleblowers”, the BBC reports. “Attempts by some states to water down the reform earlier this year were blocked at an early stage of the talks, with Luxembourg, Ireland, and Hungary seeking to have tax matters excluded. However, a coalition of EU states, including Germany, France, and Italy, eventually prevailed in keeping tax revelations within the proposal”, Reuters reports. “If member states fail to properly implement the law, the European Commission can take formal disciplinary steps against the country and could ultimately refer the case to the European Court of Justice”, the BBC reports. To know more about this new law for whistleblowers, read the official proposal.

EU grants protection to workers in the gig economy (casual or short-term employment)

In a vote on Tuesday, Members of the European Parliament (MEPs) approved minimum rights for workers with on-demand, voucher-based or platform jobs, such as Uber or Deliveroo. However, genuinely self-employed workers would be excluded from the new rules. “The law states that every person who has an employment contract or employment relationship as defined by law, collective agreements or practice in force in each member state should be covered by these new rights”, the BBC reports.
“This would mean that workers in casual or short-term employment, on-demand workers, intermittent workers, voucher-based workers, platform workers, as well as paid trainees and apprentices, deserve a set of minimum rights, as long as they meet these criteria and pass the threshold of working 3 hours per week and 12 hours per 4 weeks on average”, according to the EU’s official website. As a general principle, all workers need to be informed of their conditions from day one, but no later than seven days where justified. The specific set of rights covering new forms of employment includes:

Workers with on-demand contracts or similar forms of employment should benefit from a minimum level of predictability, such as predetermined reference hours and reference days. They should also be able to refuse, without consequences, an assignment outside predetermined hours, or be compensated if the assignment was not cancelled in time.

Member states shall adopt measures to prevent abusive practices, such as limits to the use and duration of the contract.

The employer should not prohibit, penalize or hinder workers from taking jobs with other companies if this falls outside the work schedule established with that employer.

Enrique Calvet Chambon, the MEP responsible for seeing the law through, said, “This directive is the first big step towards the implementation of the European Pillar of Social Rights, affecting all EU workers. All workers who have been in limbo will now be granted minimum rights thanks to this directive, and the European Court of Justice rulings; from now on no employer will be able to abuse the flexibility in the labour market.” To know more about this new law on the gig economy, visit the EU’s official website.

19 nations including The UK and Germany give thumbs-up to EU’s Copyright Directive
Facebook discussions with the EU resulted in changes of its terms and services for users
The EU commission introduces guidelines for achieving a ‘Trustworthy AI’
Savia Lobo
12 Apr 2019
6 min read

Wikileaks founder, Julian Assange, arrested for “conspiracy to commit computer intrusion”

Julian Assange, the Wikileaks founder, was arrested yesterday in London in accordance with the U.S./UK Extradition Treaty. He was charged with assisting Chelsea Manning, a former intelligence analyst in the U.S. Army, to crack a password on a classified U.S. government computer. The indictment states that in March 2010, Assange assisted Manning in cracking a password stored on U.S. Department of Defense computers connected to the Secret Internet Protocol Network (SIPRNet), a U.S. government network used for classified documents and communications. As an intelligence analyst, Manning had access to certain computers and used these to download classified records to transmit to WikiLeaks. “Cracking the password would have allowed Manning to log on to the computers under a username that did not belong to her. Such a deceptive measure would have made it more difficult for investigators to determine the source of the illegal disclosures”, the indictment report states. “Manning confessed to leaking more than 725,000 classified documents to WikiLeaks following her deployment to Iraq in 2009—including battlefield reports and five Guantanamo Bay detainee profiles”, Gizmodo reports. In 2013, Manning was convicted of leaking the classified U.S. government documents to WikiLeaks. She was jailed in early March this year as a recalcitrant witness after she refused to answer the grand jury’s questions. According to court filings, after her arrest, Manning was held in solitary confinement in a Virginia jail for nearly a month. Following Assange’s arrest, Ola Bini, a Swedish software developer and digital privacy activist who is allegedly close to Assange, has also been detained. “The official said they are looking into whether he was part of a possible effort by Assange and Wikileaks to blackmail Ecuador’s President, Lenin Moreno”, the Washington Post reports. Bini was detained at Quito’s airport as he was preparing to board a flight for Japan.
Martin Fowler, a British software developer and renowned author and speaker, tweeted about Bini’s arrest. He said that Bini is a strong advocate and developer supporting privacy, and has not been able to speak to any lawyers. https://twitter.com/martinfowler/status/1116520916383621121 Following Assange’s arrest, Hillary Clinton, the Democratic nominee in the 2016 presidential election, said, “The bottom line is that he has to answer for what he has done”. “WikiLeaks’ publication of Democratic emails stolen by Russian intelligence officers during the 2016 election season hurt Clinton’s presidential campaign”, the Washington Post reports. Assange, who is an Australian citizen, was dragged out of Ecuador’s embassy in London after his seven-year asylum was revoked. He had been granted asylum by former Ecuadorian President Rafael Correa in 2012 for publishing sensitive information about U.S. national security interests. Australian PM Scott Morrison told the Australian Broadcasting Corp. that the charge is a “matter for the United States” and has nothing to do with Australia. Assange was granted asylum just after “he was released on bail while facing extradition to Sweden on sexual assault allegations. The accusations have since been dropped but he was still wanted for jumping bail”, the Washington Post states. A Swedish woman alleged that she was raped by Julian Assange during a visit to Stockholm in 2010.
After Assange’s arrest on Thursday, Elisabeth Massi Fritz, the lawyer for the unnamed woman, said in a text message sent to The Associated Press that “we are going to do everything” to have the Swedish case reopened “so Assange can be extradited to Sweden and prosecuted for rape.” She further added, “no rape victim should have to wait nine years to see justice be served.” “In 2017, Sweden’s top prosecutor dropped a long-running inquiry into a rape claim against Assange, saying there was no way to have Assange detained or charged within a foreseeable future because of his protected status inside the embassy”, the Washington Post reports. In a tweet, Wikileaks posted a photo of Assange with the words: “This man is a son, a father, a brother. He has won dozens of journalism awards. He’s been nominated for the Nobel Peace Prize every year since 2010. Powerful actors, including CIA, are engaged in a sophisticated effort to dehumanize, delegitimize and imprison him. #ProtectJulian.” https://twitter.com/wikileaks/status/1116283186860953600 Duncan Ross, a data philanthropist, tweeted, “Random thoughts on Assange: 1) journalists don’t have to be nice people but 2) being a journalist (if he is) doesn’t put you above the law.” https://twitter.com/duncan3ross/status/1116610139023237121 Edward Snowden, a former security contractor who leaked classified information about U.S. surveillance programs, says the arrest of WikiLeaks founder Julian Assange is a blow to media freedom. “Assange’s critics may cheer, but this is a dark moment for press freedom”, he tweets. According to the Washington Post, in an interview with The Associated Press, Rafael Correa, Ecuador’s former president, was harshly critical of his successor’s decision to expel the Wikileaks founder from Ecuador’s embassy in London. He said that “although Julian Assange denounced war crimes, he’s only the person supplying the information.” Correa said “It’s the New York Times, the Guardian and El Pais publishing it.
Why aren’t those journalists and media owners thrown in jail?” Yanis Varoufakis, economics professor and former Greek finance minister, tweeted, “It was never about Sweden, Putin, Trump or Hillary. Assange was persecuted for exposing war crimes. Will those duped so far now stand with us in opposing his disappearance after a fake trial where his lawyers will not even now the charges?” https://twitter.com/yanisvaroufakis/status/1116308671645061120 The Democracy in Europe Movement 2025 (@DiEM_25) tweeted that Assange’s arrest is “a chilling demonstration of the current disregard for human rights and freedom of speech by establishment powers and the rising far-right.” The movement has also put up a petition against Assange’s extradition. https://twitter.com/DiEM_25/status/1116379013461815296

Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council
A security researcher reveals his discovery on 800+ Million leaked Emails available online
Leaked memo reveals that Facebook has threatened to pull investment projects from Canada and Europe if their data demands are not met
Amrata Joshi
11 Apr 2019
11 min read

Katie Bouman unveils the first ever black hole image with her brilliant algorithm

Remember the supermassive black hole in the movie Interstellar? Well, that one wasn’t real. We know that black holes end up sucking in everything that gets too close, even light. A black hole’s event horizon casts a shadow, and that shadow is enough to answer a lot of the questions attached to black hole theory. Scientists and researchers have been working for years to get that one image to ground their research. And now comes the big news: a team of astronomers, engineers, researchers and scientists has managed to capture the first ever image of a black hole, located in a distant galaxy. It is three million times the size of the Earth and measures 40 billion km across. The team describes it as "a monster", and it was photographed by a network of eight telescopes across the world. In this article, we give you a glimpse of how the image of the black hole was captured. Katie Bouman, a PhD student at MIT, appeared at TED Talks and discussed the efforts taken by the team of researchers, engineers, astronomers and scientists to capture the first ever image of the black hole. Katie is part of an international team of astronomers who worked on creating the world’s largest telescope, the Event Horizon Telescope, to click the first ever picture of a black hole. She led the development of a computer program that made this impossible, possible! She started working on the algorithm three years ago while she was a graduate student. https://twitter.com/jenzhuscott/status/1115987618464968705 Katie wrote in the caption to one of her Facebook posts, "Watching in disbelief as the first image I ever made of a black hole was in the process of being reconstructed." https://twitter.com/MIT_CSAIL/status/1116035007406116864 Further, she explains how the stars we see in the sky basically orbit an invisible object.
And according to the astronomers, the only thing that can cause this motion of the stars is a supermassive black hole.

Zooming in at radio wavelengths to see a ring of light

“Well, it turns out that if we were to zoom in at radio wavelengths, we'd expect to see a ring of light caused by the gravitational lensing of hot plasma zipping around the black hole. Is it possible to see something that, by definition, is impossible to see?” -Katie Bouman

If we look closely, we can see that the black hole casts a shadow on the backdrop of bright material, carving out a sphere of darkness. It is a bright ring that reveals the black hole's event horizon, where the gravitational pull becomes so powerful that even light can’t escape. Einstein's equations predict the size and shape of this ring, and taking a picture of it would help to verify that these equations hold in the extreme conditions around a black hole.

Capturing a black hole needs a telescope the size of the Earth

“So how big of a telescope do we need in order to see an orange on the surface of the moon and, by extension, our black hole? Well, it turns out that by crunching the numbers, you can easily calculate that we would need a telescope the size of the entire Earth.” -Katie Bouman

Bouman further explains that the black hole is so far away from Earth that this ring appears incredibly small, as small as an orange on the surface of the moon, and that makes it extremely difficult to capture a photo of it. There are fundamental limits to the smallest objects that we can see because of diffraction, so the astronomers realized that they needed to make their telescope bigger and bigger. Even the most powerful optical telescopes couldn’t get close to the resolution necessary to image something that small on the surface of the moon. She showed the audience one of the highest-resolution images of the moon ever taken from Earth: it contained around 13,000 pixels, and each pixel covered over 1.5 million oranges.
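The "telescope the size of the Earth" figure follows from the diffraction limit, which ties the smallest resolvable angle to observing wavelength divided by aperture size. A back-of-the-envelope sketch, assuming the EHT's roughly 1.3 mm observing wavelength and treating the Earth's diameter as the aperture:

```python
import math

WAVELENGTH = 1.3e-3        # metres; the EHT observes at ~1.3 mm (230 GHz)
EARTH_DIAMETER = 1.2742e7  # metres

# Rayleigh criterion: smallest resolvable angle, in radians
theta_rad = 1.22 * WAVELENGTH / EARTH_DIAMETER

# Convert to microarcseconds (1 rad = 206,264.8 arcseconds)
theta_uas = theta_rad * 206264.8 * 1e6
print(f"resolution: {theta_uas:.0f} microarcseconds")  # roughly 26 microarcseconds
```

That is fine enough to resolve a ring a few tens of microarcseconds across, which is why an Earth-sized aperture is the magic threshold, and why any single-dish telescope falls short by orders of magnitude.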
Capturing the black hole turned into reality by connecting telescopes

“And so, my role in helping to take the first image of a black hole is to design algorithms that find the most reasonable image that also fits the telescope measurements.” -Katie Bouman

According to Bouman, we would require a telescope the size of the Earth to see an orange on the surface of the moon, so capturing a black hole seemed imaginary back then: it was nearly impossible to build such a powerful telescope. Bouman highlighted the famous words of Mick Jagger, "You can't always get what you want, but if you try sometimes, you just might find you get what you need." Capturing the black hole turned into a reality by connecting telescopes from around the world. The Event Horizon Telescope, an international collaboration, created a computational telescope the size of the Earth, capable of resolving structure on the scale of a black hole's event horizon. The setup was such that the telescopes in the worldwide network worked together. The research teams at each of the sites collected thousands of terabytes of data, which was then processed in a lab in Massachusetts. Let’s understand this in depth by assuming that we could build an Earth-sized telescope. Imagine the Earth as a spinning disco ball, where each mirror of the ball collects light that can be combined together to form a picture. If most of those mirrors are removed, a few still remain. It is still possible to combine this information together, but now there are a lot of holes. The remaining mirrors represent the locations where the telescopes have been set up. Though this seems like a small number of measurements to make a picture from, it is effective: the light gets collected at only a few telescope locations, but as the Earth rotates, other new measurements also get explored.
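The disco-ball rotation can be sketched numerically. In radio interferometry, each pair of telescopes contributes one spatial-frequency sample per snapshot, and the Earth's rotation sweeps that sample along an arc, multiplying the coverage for free. The following is a simplified 2-D sketch with a hypothetical baseline; the real analysis also projects each baseline toward the source direction, which is ignored here:

```python
import math

def uv_track(baseline_east, baseline_north, hour_angles_deg):
    """Rotate a ground baseline (in metres) through a list of hour
    angles and return the (u, v) samples it sweeps out."""
    samples = []
    for ha in hour_angles_deg:
        a = math.radians(ha)
        u = baseline_east * math.cos(a) - baseline_north * math.sin(a)
        v = baseline_east * math.sin(a) + baseline_north * math.cos(a)
        samples.append((u, v))
    return samples

# One hypothetical 8,000 km baseline observed for six hours (90 degrees
# of Earth rotation), sampled every 15 degrees of hour angle
track = uv_track(8.0e6, 0.0, range(0, 91, 15))
# A single snapshot would give 1 sample; rotation turns it into an arc of 7
print(len(track), "samples along the arc")
```

With eight stations there are up to 28 such baselines, each tracing its own arc, which is how a handful of dishes ends up sampling enough of the measurement plane to constrain an image.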
So, as the disco ball spins, the mirrors change locations and the astronomers get to observe different parts of the image. The imaging algorithms developed by the experts, scientists and researchers fill in the missing gaps of the disco ball in order to reconstruct the underlying black hole image. Katie Bouman said, “If we had telescopes located everywhere on the globe -- in other words, the entire disco ball -- this would be trivial. However, we only see a few samples, and for that reason, there are an infinite number of possible images that are perfectly consistent with our telescope measurements.” According to Bouman, not all the images are created equal: some of those images look more like what the astronomers, scientists and researchers think of as images, compared to others. Bouman’s role in helping to take the first image of the black hole was to design the algorithms that find the most reasonable image that also fits the telescope measurements. The imaging algorithms Katie developed used the limited telescope data to guide the astronomers to a picture. With the help of these algorithms, it was possible to bring together the pieces of the picture from the sparse and noisy data.

How the algorithm was used in the creation of the black hole image

“I'd like to encourage all of you to go out and help push the boundaries of science, even if it may at first seem as mysterious to you as a black hole.” -Katie Bouman

There is an infinite number of possible images that perfectly explain the telescope measurements, and the astronomers and researchers have to choose between them. They do this by ranking the images based upon how likely they are to be the black hole image, and then selecting the one that's most likely. Bouman explained it with the help of an example: “Let's say we were trying to make a model that told us how likely an image were to appear on Facebook.
We'd probably want the model to say it's pretty unlikely that someone would post this noise image on the left, and pretty likely that someone would post a selfie like this one on the right. The image in the middle is blurry, so even though it's more likely we'd see it on Facebook compared to the noise image, it's probably less likely we'd see it compared to the selfie.” When it comes to images of the black hole, things get confusing for the astronomers and researchers, according to Katie, because they have never seen a black hole before. She further explained how difficult it is to rely on any previous theories for these images; it is even difficult to completely rely on images from simulations for comparison. She said, “What is a likely black hole image, and what should we assume about the structure of black holes? We could try to use images from simulations we've done, like the image of the black hole from "Interstellar," but if we did this, it could cause some serious problems. What would happen if Einstein's theories didn't hold? We'd still want to reconstruct an accurate picture of what was going on. If we bake Einstein's equations too much into our algorithms, we'll just end up seeing what we expect to see. In other words, we want to leave the option open for there being a giant elephant at the center of our galaxy.” According to Bouman, different types of images have distinct features, so it is quite possible to identify the difference between black hole simulation images and images captured by the team. So the researchers need to let the algorithms know what images look like without imposing one type of image's features too strongly. This can be done by imposing the features of different kinds of images and then looking at how the assumed image type affects the reconstruction of the final image. The researchers and astronomers become more confident about their image assumptions if different image types produce a very similar-looking image.
She said, “This is a little bit like giving the same description to three different sketch artists from all around the world. If they all produce a very similar-looking face, then we can start to become confident that they're not imposing their own cultural biases on the drawings.” It is possible to impose different image features by using pieces of existing images. So the astronomers and researchers took a large collection of images, broke them down into little image patches, and then treated each image patch like a piece of a puzzle. They used commonly seen puzzle pieces to piece together an image that also fits their telescope measurements. She said, “Let's first start with black hole image simulation puzzle pieces. OK, this looks reasonable. This looks like what we expect a black hole to look like. But did we just get it because we just fed it little pieces of black hole simulation images?” If we take a set of puzzle pieces from everyday images, like the ones we take with our own personal camera, then we get the same image from all the different sets of puzzle pieces, and we become more confident that the image assumptions we made aren't biasing the final image. According to Bouman, another thing that can be done is to take the same set of puzzle pieces, like the ones derived from everyday images, and use them to reconstruct different kinds of source images. Bouman said, “So in our simulations, we pretend a black hole looks like astronomical non-black hole objects, as well as everyday images like the elephant in the center of our galaxy.” And when the results of the algorithms look very similar to the simulated image, the researchers and astronomers become more confident about their algorithms. She emphasized that all of these pictures were created by piecing together little pieces of everyday photographs, like the ones we take with our own personal cameras.
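The underlying selection problem, choosing one image out of the infinitely many that fit a handful of measurements, can be caricatured as regularized least squares. The sketch below is a heavy simplification of the actual patch-based method: it is 1-D, noise-free, and swaps the learned puzzle-piece prior for a plain smoothness penalty, but it shows how a prior picks a single reconstruction out of underdetermined "visibility" data:

```python
import numpy as np

n = 64
true = np.zeros(n)
true[24:40] = 1.0                       # a simple box-shaped "source"

# Sparse telescope measurements: only a few Fourier components observed
freqs = np.array([0, 1, 2, 3, 5, 8, 13])
F = np.exp(-2j * np.pi * np.outer(freqs, np.arange(n)) / n)  # forward operator
y = F @ true                            # observed "visibilities" (noise-free)

# Smoothness prior: penalise differences between neighbouring pixels
D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
lam = 1.0

# Regularised least squares: argmin ||Fx - y||^2 + lam * ||Dx||^2
# (real parts suffice because the unknown image x is real-valued)
A = (F.conj().T @ F).real + lam * D.T @ D
b = (F.conj().T @ y).real
recon = np.linalg.solve(A, b)

err = np.linalg.norm(recon - true) / np.linalg.norm(true)
print("relative reconstruction error:", err)
```

Different priors (here, different values of `lam`, or a patch dictionary in place of `D`) yield different reconstructions from the same data, which is exactly why the team checked that reconstructions stayed similar across very different sets of puzzle pieces.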
So an image of a black hole, which we have never seen before, can be created by piecing together pictures we see regularly, like images of people, buildings, trees, cats and dogs. She concluded by appreciating the efforts taken by her team: “But of course, getting imaging ideas like this working would never have been possible without the amazing team of researchers that I have the privilege to work with. It still amazes me that although I began this project with no background in astrophysics. But big projects like the Event Horizon Telescope are successful due to all the interdisciplinary expertise different people bring to the table.” This project will surely encourage many researchers, engineers, astronomers and students who are in the dark and not confident of themselves but have the potential to make the impossible possible. https://twitter.com/fchollet/status/1116294486856851459

Is the YouTube algorithm’s promoting of #AlternativeFacts like Flat Earth having a real-world impact?
YouTube disables all comments on videos featuring children in an attempt to curb predatory behavior and appease advertisers
Using Genetic Algorithms for optimizing your models [Tutorial]