Tech News - Data

1209 Articles

Announcing: New Power BI experiences in Microsoft Teams from Microsoft Power BI Blog | Microsoft Power BI

Matthew Emerick
22 Sep 2020
1 min read
The way we work is changing dramatically. It’s more connected, collaborative, and often done remotely. Organizations need tools to help everyone infuse data into every decision. We’re excited to announce new Power BI integrations for Microsoft Teams to make it easier to discover and use data within your organization.  


Driving a data culture in a world of remote everything from Microsoft Power BI Blog | Microsoft Power BI

Matthew Emerick
22 Sep 2020
1 min read
During this new Microsoft Ignite format, with 48 hours of digital sessions and interactions where thousands of IT professionals will come together, we have several exciting innovations to announce that will help customers drive clarity when they need it most.


Introducing a new way to bring Tableau analytics into Salesforce from What's New

Matthew Emerick
18 Sep 2020
4 min read
Geraldine Zanolli, Developer Evangelist | September 18, 2020

At Tableau, we believe that our customers need analytics in their workflows, and Salesforce customers are no exception. While there are existing ways for our customers to embed Tableau content inside Salesforce, the new Tableau Viz Lightning web component makes it easy to integrate Tableau visualizations into Salesforce in just a few clicks. Today, we are excited to release the Tableau Viz Lightning web component, available now on Salesforce AppExchange. By using it, any Salesforce admin or developer can integrate any Tableau dashboard into a Salesforce Lightning page.

You may have already seen the component used in Work.com, Salesforce's offering of advisory services and technology solutions to help companies and communities safely reopen in the COVID-19 environment. The Work.com team used the Tableau Viz Lightning web component to add the Global COVID-19 Tracker dashboard to the Workplace Command Center, a single source of truth that gives organizations a 360-degree view of return-to-work readiness across all of their locations, employees, and visitors.

"Surfacing Tableau dashboards in the Command Center illustrates the power and convenience of the 'single pane of glass,'" shared Xander Mitman, Director of Product Management at Salesforce. "Best of all, if customers want to add more Tableau dashboards—either public or proprietary—it only takes a few clicks to make those changes. The Tableau Viz Lightning web component makes it fast and easy for business technology teams to take an agile approach to figure out what makes end users most efficient and productive."

Easy embedding for Salesforce
Any Salesforce user can visit AppExchange and install the Tableau Viz Lightning web component in their org. With three clicks, the Lightning web component is ready to be used in Salesforce. Then, Salesforce admins can drag and drop the Lightning web component onto a page. Users will need to get the URL of the visualization they want to embed from Tableau Online, Server, or Public, and can then customize the look and feel by adjusting the height or showing the Tableau toolbar.

Furthermore, to keep users in their workflow, two filtering options are available on Record Pages (such as an Account or Opportunity page):
Context filtering allows users to filter the visualization based on the record they are on at the moment.
Advanced filtering lets users define their own filter based on the visualization they are embedding and the information on the page.

To learn more about how to configure the Tableau Viz Lightning web component, check out Embed Tableau Views in Salesforce in Help. In the same spirit of making the user experience easier, we also released new help articles on setting up single sign-on (SSO) for the Tableau Viz Lightning web component, which currently supports SAML. For our fully native and deeply integrated analytics solution for Salesforce, check out Einstein Analytics.

Developers, build your own solution on top of the Tableau Viz Lightning web component
Each deployment of Tableau + Salesforce is different—different content, consumers, use cases, etc. We recognize that the Tableau Viz Lightning web component isn't a one-size-fits-all solution, and that is why you can access the full Lightning web component as an open-source project. Developers can build on top of our Tableau Viz Lightning web component by embedding it in their own Lightning web component. One advantage of using composition to build a component is that developers can benefit from the improvements we make to the Tableau Viz Lightning web component without having to change their code. We released the Tableau Viz Lightning web component with sample code available on GitHub—look for more coming soon. Install the Tableau Viz Lightning web component from AppExchange to get Tableau inside Salesforce today!


We got Tableau certified, you can too! from What's New

Matthew Emerick
18 Sep 2020
5 min read
Keri Harling, Senior Copywriter, Tableau Software | September 18, 2020

Data skills are important now more than ever. Whether you just started at your university or are finishing up your final year, there's always something new to learn. Students are eligible to receive free Tableau licenses, eLearning, and 20% off Tableau Desktop Specialist Certification through Tableau Academic Programs. To help set students up for success, we sat down with two amazing women on different data journeys to hear their advice on preparing for the Tableau Certification exam, along with a step-by-step guide. Spoiler: you'll crush it.

Bergen Schmetzer, Tableau Academic Programs: I've been at Tableau for four years, and have found my happy place on the Tableau Academic team. I was introduced to Tableau in my junior year of college, and it truly changed the way I look at data and analytics. The best advice I can give to students looking to start their analytics journey is just to take that first step. If you are feeling some sense of fear—that's GOOD! You are beginning something new and unfamiliar, and that's excitingly scary. Leveling up your skills, especially in data analytics, doesn't happen overnight. What I love about Tableau is the focus on supporting and elevating the people in our Community. Our goal is to provide people with the resources and skills to empower themselves. Plus, we are passionate about celebrating the success of like-minded data rockstars.

Kelly Nesenblatt, Student: I'm a senior at the University of Arizona and preparing to enter the workforce out of college. I saw a huge need for data skills at the companies I was interested in but didn't know where to start. I knew Tableau's Academic Programs offer Tableau Desktop, Prep, and eLearning for free to students, and I recently found out about the Desktop Specialist Certification discount. Not only was this an opportunity to add a certification to my resume, but it was also a great reason to strengthen my data skills. If I had one piece of advice to share—be confident in what you know. If you have prepared and are comfortable with the platform, the Specialist exam will greatly benefit you. Since passing the Specialist exam, my goal is to complete the Associate and Professional levels next.

Steps to pass the Certification Exam:

1. Join the Tableau for Students or Tableau for Teaching Program
We've helped over one million students and instructors find empowerment in Tableau. Students and instructors can receive free licenses and eLearning through our Academic Programs.

2. Schedule your exam
It may sound crazy, but schedule your exam first. It's counter-intuitive, but setting a deadline for yourself will drive you to study. After you've been verified as a student, you will receive a 20% discount off the Tableau Certification Exam. The discount applies automatically during checkout. Your exam will be valid for six months after your purchase date, and you can reschedule anytime up to 24 hours before the exam start time.

3. Download Tableau Desktop, access free eLearning
You've activated your Tableau license, scheduled your exam, and now it's time to study. eLearning is one of the best places to begin preparing for your exam. We recommend starting with Desktop I to get familiar with the terminology and basics of Tableau. Completing Desktop I takes around 10 hours, but since it's self-paced, you can go at whatever speed is comfortable for you. We don't mind waiting for greatness.

4. Practice makes perfect
We have a TON of support materials outside of eLearning to help prepare you for the exam. See below for some of our favorite go-to study resources.
Training videos—we have hundreds of videos ranging from quick tips to deep dives in Tableau.
Tableau for Student Guide—Maria Brock, a Tableau Student Ambassador, put together an entire website dedicated to students looking to learn about Tableau. She spoils us!

5. Get inspired by the Tableau Community
The Tableau Community is a group of brilliant Tableau cheerleaders. They love seeing people find the magic within Tableau, and are an amazing support system. You can get to know our Community in several ways:
Read how other students use Tableau or hear from Tableau interns through our Generation Data blog series.
Looking for specific answers? Our Student Ambassadors are Tableau Champions at their university and assist other students in their Tableau journey.
Connect to the people and information you need most. Our global Community, the Tableau Community Forums, actively answers your questions. From dashboard designs to tips and tricks, we're here to help.
Check out Tableau Public to see some incredible vizzes from members of the Community.

6. Day of Exam
It's normal to have day-of-exam jitters, but if you've leveraged some of the resources we've shared in this blog, you've got nothing to worry about. Double-check your systems are set up and ready for the exam, choose an environment with a reliable internet connection, and make sure you will be undisturbed throughout the exam period. The exam is timed, so it's important to remember you can always flag questions if you get stuck and come back to them later. Don't let one tricky question play mind games with you and make you lose confidence—you've got this.

7. You've leveled UP!
Celebrate your certification. If you passed, it's time to show it off. Share your well-deserved badge on social media and use the hashtag #CertifiablyTableau. Certifications are an identifiable way to demonstrate your data know-how and willingness to invest in your future. Having this certification under your belt will make you stand out among your peers to future employers. If you didn't pass the exam the first time, don't get discouraged. It happens to the best of us. The second time's a charm.

Join our Tableau for Students program to get started today and receive 20% off the Tableau Desktop Specialist exam.


Continuing to invest in our unique approach to Data Management from What's New

Matthew Emerick
17 Sep 2020
4 min read
Tracy Rodgers, Senior Product Marketing Manager | September 17, 2020

It's hard to believe that a year ago today, we launched Tableau Catalog, part of our Data Management offering. It was the first time our customers were really able to see our unique vision for self-service data management in practice. At Tableau, we believe that Data Management is something that everyone, both IT and the business, should benefit from—and we built an integrated solution that reflects that belief. And since launch, we've continued to deliver more features and more value to organizations of all sizes.

Tableau Catalog provides data quality warnings, helpful metadata, lineage, and more—all in the context of analysis.

Self-service requires trust, visibility, and governance
Data Management solutions have been in the market for 30+ years. However, they've always focused on solving data management problems from the perspective of IT. This can lead to data being duplicated, mistrusted, or too locked down. In today's world, we have more data, in more formats, at different levels (from enterprise to departmental to individual data needs), and in more places than ever before. At the same time, more people are expected to use data to do their jobs effectively and efficiently. This is where we're seeing a gap. Everyone in an organization needs to be able to find, access, and trust the right data so that they can do their jobs effectively, while using secure data responsibly.

There are solutions to address data preparation, cataloging, and analytics. And while they are extremely valuable in their own right and address specific problems, there aren't many vendors that bring it all together. Tableau has created a convergence for data management, helping solve the most prominent challenges all under one platform with a successful data management experience, regardless of who the user is. That's been our vision even before we released Tableau Catalog last September. Now, we're seeing people leverage it.

With Tableau Data Management, organizations are providing more visibility and trust in data analysis than ever before. Whether it's a CEO reviewing their finance dashboard or a data steward doing maintenance on their server, everyone can better understand what the data means, where it's coming from, and whether it's up to date. Where enterprise data catalogs help provide a comprehensive catalog of the data in your ecosystem, Tableau focuses on the analytics catalog, giving people the information they need when and where they need it, directly in the flow of their analysis.

Lake County Health Department is one organization that has not only re-examined its data strategy as a result of data management, but has also scaled how it educates its consumers in a timely manner, at a moment when having accurate data at your fingertips is more important than ever. We've seen both enterprise and commercial businesses use Tableau Data Management to help identify personally identifiable information (PII) and remove it from their business processes. Internally, at Tableau, we've used Tableau Catalog to help us efficiently move from one database to another while having minimal impact on the enterprise at large.

There's plenty more to come for Tableau Data Management
Tableau is dedicated to helping people take advantage of what it means to do self-service data management, and this means continuing to innovate so that our offering supports more use cases. With our 2020.3 release, we introduced the capability to write to external databases from Tableau Prep, which allows organizations to leverage their existing investments and governance policies while providing a single platform to go from data prep to analytics, seamlessly. And we've heard great feedback from customers that influenced updates we've already implemented. For example, we introduced high-visibility data quality warnings in 2020.2 when we heard that the default data quality warnings weren't catching users' attention.

Tableau Prep's write-to-database capability is now available with Tableau 2020.3.

In upcoming releases, we have some even bigger product announcements related to Data Management—you aren't going to want to miss them! Make sure to tune into the Tableau Conference-ish keynote on Tuesday, October 6th to hear about where we're taking Data Management next. Register today—it's completely free!


Seizing the moment: Enabling schools to manage COVID-19 using data-driven analysis from What's New

Anonymous
17 Sep 2020
7 min read
Eillie Anzilotti, Public Affairs Specialist at Tableau | September 17, 2020

Hard as it is to grapple with the many far-reaching impacts of COVID-19 on our culture, it's even harder to imagine how we would have coped with the same situation even a few decades ago. If the pandemic had hit in 2000 instead of 2020, where would we be? The virtual meetings we conduct for our daily work wouldn't be live on video with document sharing; at best, we'd be on conference calls and sending artifacts by fax. Our safety updates about the virus wouldn't arrive via social media; we'd mostly rely on word of mouth, the daily papers, and the evening news.

Likewise, schools would have been in even graver danger of being left behind by COVID-19 than they are right now. The virus has made traditional day-to-day K-12 education all but impossible for the moment, and schools are working with the latest technology to serve their needs in the best ways possible. So what does that look like? How are schools meeting the need for solutions that address the breadth AND depth of the problem? How are they coordinating resources across disconnected, socioeconomically diverse student populations, and meeting the needs of all involved? This blog takes a look at two stakeholder groups: students and their families, and teachers and administrators. In each case, we'll see examples of how people are using data to overcome barriers imposed by the pandemic.

Gathering data from students and families
For most school districts, the first step in providing an alternate system of instruction was to assess what students wanted and needed in order to participate. They distributed surveys to households, both on paper and over the phone and web, to find out each student's readiness for online remote learning. Did they have a laptop or other connected device for attending classes? Did they need a Wi-Fi hotspot in order to access the internet? Once online, could they successfully connect to the district's learning systems?

In addition to technological readiness, surveys were also useful for tracking students' engagement. "We wanted to know how students were feeling about distance learning," said Hope Langston, director of assessment services for the Northfield Public School District in Minnesota. Survey responses indicated the level of difficulty students had adjusting to the changes, helping identify areas that needed the most urgent attention. Northfield continues to offer follow-up surveys that help measure their progress in addressing these issues over time.

Equal Opportunity Schools (EOS), a Seattle-based nonprofit dedicated to improving access for students of color and low-income students, has conducted research throughout the pandemic that assesses remote learning by aggregating various factors of student sentiment, including teacher and principal evaluation, barriers to motivation, and "belonging" as it pertains to their identity, culture, and classroom experience. EOS uses Tableau to relate these factors to one another and gain insights into possible paths for achieving a more comprehensive learning experience for all students.

Equal Opportunity Schools student experience dashboard (EOS)

But school isn't entirely about teaching and learning. Districts have resources that help make sure students are healthy and safe, and deploying those resources during COVID-19 also requires a data-driven strategy. The El Paso Independent School District fed survey data into Tableau visualizations and used them for planning nutritional and medical interventions where they were needed, including conducting telehealth sessions between school nurses and students who fell ill. If a household couldn't be reached for survey or classroom participation, truancy officers investigated to check on the wellbeing of students in their homes. As a district whose majority population is economically disadvantaged, and where one-third of students have limited English proficiency, these interventions were especially important for ensuring effective, equitable outreach.

Student response dashboard (El Paso Independent School District)

Similarly, Northfield used Tableau to visualize survey responses and other data related to its holistic pandemic response, with a heightened focus on achieving equity for underserved areas of the district's community, one-quarter of whom qualify for free and reduced lunch. Using a need-based "heat map" visualization as a daily tracker, Northfield set up food distribution centers in strategic locations and tracked the number of meals delivered per day at each site. Langston and her team also used the data to mobilize local volunteers to help families with language and socioeconomic challenges navigate connectivity.

Meal distribution counts by location (Northfield Healthy Community Initiative)

Throughout these efforts, the availability and visibility of survey responses and other data has been key to coordinating an effective response. "Our dashboards help us meet a need in our community, by getting us the information we need as clearly and as quickly as possible," said Langston. "We couldn't have done this if we didn't have an accurate picture of what the need is."

Empowering teachers and administrators
Teacher readiness—both practical and emotional—is another important factor in achieving effective remote learning operations, and many districts are using a similar survey-based approach to stay connected with educator sentiment. School administrators rely on this information to make sure teachers are getting the resources and support they need, as well as to work with teachers on tracking student engagement and taking action where needed to help their situation improve.

In El Paso, Tableau dashboards track data from the district's remote learning platform and student information system to identify gaps in learning and take immediate action to address them. Viewing data at the district and school levels, and then filtering it down to specific populations or individual students, makes it easy to quickly identify students at risk and report each case to the relevant principal or teacher. The visualizations are used to lead weekly meetings among school principals and assign school-specific tasks.

El Paso Schoology participation rates (El Paso Independent School District)

"Having the dashboards, and being able to quickly export customized reports, meant we could readily engage school administrators," said Steve Clay, Executive Director of Analytics, Strategy, Assessment, and Public Education Information Management Systems (PEIMS) for El Paso ISD. "As soon as we noticed problematic numbers, we could hand them a list of students and say: Here are the ones who aren't engaging—what's your plan, what haven't you tried yet, and how can we get you some help?"

El Paso also used analytics and reporting to track compliance with new participation-based grading systems, which most teachers had never used before COVID-19. When teachers weren't trained or reminded to use the new system, the grades they reported in the remote learning platform didn't match the new guidelines, potentially confounding the ability to measure student progress. By visualizing the data and seeing the discrepancies, administrators could contact and coach the affected teachers and help bring the invalid grades to a point of fidelity with other measurements in the system.

Preparing for the immediate and long-term future
As schools continue to evolve their policies and practices, both during and after COVID-19, data will play a key role. Districts that lead the way with data-focused innovations are already finding success in adopting new policies that their states set forth. El Paso's approach to remote learning helped it comply with new standards for student engagement imposed by the Texas Education Agency (TEA), and the district could readily report its progress to TEA using the codes visible in Tableau. The better equipped a district is to tackle problems efficiently and at a granular level, the more capably it can face down unknown challenges in the future.

To see example visualizations from El Paso ISD and other K-12 institutions using Tableau, visit the Tableau Use Cases in Education site.

Intel introduces cryogenic control chip, ‘Horse Ridge’ for commercially viable quantum computing

Fatema Patrawala
11 Dec 2019
4 min read
On Monday, Intel Labs introduced a first-of-its-kind cryogenic control chip, codenamed Horse Ridge. According to Intel, Horse Ridge will enable commercially viable quantum computers and speed up the development of full-stack quantum computing systems. Intel announced that Horse Ridge will enable control of multiple quantum bits (qubits) and set a clear path toward scaling larger systems. This seems to be a major milestone on the path to quantum practicality, as the challenge for quantum computing right now is that it only works at temperatures near absolute zero. Intel is trying to change that with this control chip. As per Intel, Horse Ridge will enable control at very low temperatures, as it will eliminate hundreds of wires going into a refrigerated case that houses the quantum computer.

Horse Ridge was developed in partnership with Intel's research collaborators at QuTech at Delft University of Technology. It is fabricated using Intel's 22-nanometer FinFET manufacturing technology. The in-house fabrication of these control chips at Intel will dramatically accelerate the company's ability to design, test, and optimize a commercially viable quantum computer, the company said.

"A lot of research has gone into qubits, which can do simultaneous calculations. But Intel saw that controlling the qubits created another big challenge to developing large-scale commercial quantum systems," states Jim Clarke, director of quantum hardware at Intel, in the official press release. "It's pretty unique in the community, as we're going to take all these racks of electronics you see in a university lab and miniaturize that with our 22-nanometer technology and put it inside of a fridge," added Clarke. "And so we're starting to control our qubits very locally without having a lot of complex wires for cooling."

The name "Horse Ridge" is inspired by one of the coldest regions in Oregon, known as Horse Ridge. The chip is designed to operate at cryogenic temperatures of approximately 4 kelvins, which is about -452 degrees Fahrenheit (-269 degrees Celsius).

What is the innovation behind Horse Ridge
Quantum computers promise the potential to tackle problems that conventional computers can't handle by themselves. Quantum computers leverage a phenomenon of quantum physics that allows qubits to exist in multiple states simultaneously. As a result, qubits can conduct a large number of calculations at the same time, dramatically speeding up complex problem-solving. But Intel acknowledges the fact that the quantum research community still lags behind in demonstrating quantum practicality, a benchmark to determine if a quantum system can deliver game-changing performance to solve real-world problems.

To date, researchers have focused on building small-scale quantum systems to demonstrate the potential of quantum devices. In these efforts, researchers have relied upon existing electronic tools and high-performance computing rack-scale instruments to connect the quantum system to the traditional computational devices that regulate qubit performance and program the system inside the cryogenic refrigerator. These devices are often custom designed to control individual qubits, requiring hundreds of connective wires in and out of the refrigerator. However, this extensive control cabling for each qubit hinders the ability to scale the quantum system to the hundreds or thousands of qubits required to demonstrate quantum practicality, not to mention the millions of qubits required for a commercially viable quantum solution.

With Horse Ridge, Intel radically simplifies the control electronics required to operate a quantum system. Replacing these bulky instruments with a highly integrated system-on-chip (SoC) will simplify system design and allow for sophisticated signal processing techniques to accelerate set-up time, improve qubit performance, and enable the system to efficiently scale to larger qubit counts.

"One option is to run the control electronics at room temperature and run coax cables down to configure the qubits. But you can immediately see that you're going to run into a scaling problem because you get to hundreds or thousands of cables and it's not going to work," said Richard Uhlig, Managing Director of Intel Labs. "What we've done with Horse Ridge is that it's able to run at temperatures that are much closer to the qubits themselves. It runs at about 4 degrees Kelvin. The innovation is that we solved the challenges around getting CMOS to run at those temperatures and still have a lot of flexibility in how the qubits are controlled and configured."

To know more about this exciting news, check out the official announcement from Intel.

Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
The US to invest over $1B in quantum computing, President Trump signs a law
Quantum computing, edge analytics, and meta learning: key trends in data science and big data in 2019


Netflix open-sources Metaflow, its Python framework for building and managing data science projects

Fatema Patrawala
04 Dec 2019
5 min read
Yesterday, the Netflix team announced that it is open-sourcing Metaflow, a Python library that helps scientists and engineers build and manage real-life data science projects. The Netflix team writes, "Over the past two years, Metaflow has been used internally at Netflix to build and manage hundreds of data-science projects from natural language processing to operations research."

Metaflow was developed by Netflix to boost the productivity of data scientists who work on a wide variety of projects, from classical statistics to deep learning. It provides a unified API to the infrastructure stack required to execute data science projects, from prototype to production.

Metaflow integrates with Netflix's data science infrastructure stack
Models are only a small part of an end-to-end data science project. Production-grade projects rely on a thick stack of infrastructure. At the minimum, projects need data and a way to perform computation on it. In a business environment like Netflix's, a typical data science project touches all the layers of that stack (source: Netflix). Data is accessed from a data warehouse, which can be a folder of files, a database, or a multi-petabyte data lake. The modeling code crunches the data and is executed in a compute environment, while a job scheduler is used to orchestrate multiple units of work. The team then architects the code to be executed by structuring it as an object hierarchy, Python modules, or packages, and versions the code, the input data, and the ML models they produce.

After the model has been deployed to production, the team faces pertinent questions about model operations, for example: How do you keep the code running reliably in production? How do you monitor its performance? How do you deploy new versions of the code to run in parallel with the previous version? Additionally, at the very top of the stack there are other questions, like how to produce features for your models, or how to develop models in the first place using off-the-shelf libraries.

This is where Metaflow provides a unified approach to navigating the stack. Metaflow is more prescriptive about the lower levels of the stack but less opinionated about the actual data science at the top of the stack. Developers can use Metaflow with their favorite machine learning or data science libraries, such as PyTorch, TensorFlow, or scikit-learn. Metaflow allows you to write models and business logic as idiomatic Python code. Internally, Metaflow leverages existing infrastructure when feasible. The core value proposition of Metaflow is its integrated full-stack, human-centric API, rather than reinventing the stack itself.

Metaflow on Amazon Web Services
Metaflow is a cloud-native framework: it leverages the elasticity of the cloud by design, both for compute and storage. Netflix is one of the largest users of Amazon Web Services (AWS) and has accumulated plenty of operational experience and expertise in dealing with the cloud. For this open-source release, Netflix partnered with AWS to provide a seamless integration between Metaflow and various AWS services. Metaflow comes with built-in capability to snapshot all code and data in Amazon S3 automatically, a key value proposition for the internal Metaflow setup. This provides data science teams with a comprehensive solution for versioning and experiment tracking without any user intervention, which is core to any production-grade machine learning infrastructure. In addition, Metaflow comes bundled with a high-performance S3 client, which can load data at up to 10 Gbps.
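To make the "idiomatic Python" workflow style concrete, here is a minimal sketch of a flow following Metaflow's FlowSpec/@step pattern; the flow and its step contents are illustrative, not one of Netflix's examples.

```python
from metaflow import FlowSpec, step

class ToyTrainingFlow(FlowSpec):
    """Each @step's code and data artifacts are snapshotted by Metaflow."""

    @step
    def start(self):
        # Instance attributes assigned in a step are versioned automatically.
        self.numbers = [1, 2, 3, 4]
        self.next(self.train)

    @step
    def train(self):
        # Stand-in for real model training on self.numbers.
        self.model = sum(self.numbers) / len(self.numbers)
        self.next(self.end)

    @step
    def end(self):
        print("trained 'model':", self.model)

if __name__ == "__main__":
    ToyTrainingFlow()
```

Running `python toy_training_flow.py run` executes the steps locally; the same flow can typically be pointed at cloud compute and the S3-backed datastore through configuration rather than code changes.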
Additionally, Metaflow provides a first-class local development experience. It allows data scientists to develop and test code quickly on laptops, similar to any Python script. If the workflow supports parallelism, Metaflow takes advantage of all CPU cores available on the development machine.

How is Metaflow different from existing Python frameworks?
On Hacker News, developers discussed how Metaflow differs from existing tools and workflows. One of them comments, "I don't like to criticise new frameworks / tools without first understanding them, but I like to know what some key differences are without the marketing/PR fluff before giving one a go. For instance, this tutorial example here does not look substantially different to what I could achieve just as easily in R, or other Python data wrangling frameworks. Is the main feature the fact I can quickly put my workflows into the cloud?"

Someone from the Metaflow team responds on the thread, "Here are some key features:
- Metaflow snapshots your code, data, and dependencies automatically in a content-addressed datastore, which is typically backed by S3, although local filesystem is supported too. This allows you to resume workflows, reproduce past results, and inspect anything about the workflow e.g. in a notebook. This is a core feature of Metaflow.
- Metaflow is designed to work well with a cloud backend. We support AWS today but technically other clouds could be supported too. There's quite a bit of engineering that has gone into building this integration. For instance, using Metaflow's built-in S3 client, you can pull over 10Gbps, which is more than you can get with e.g. aws CLI today easily.
- We have spent time and effort in keeping the API surface area clean and highly usable. YMMV but it has been an appealing feature to many users this far."

Developers can find the project home page here and its code on GitHub.

Netflix open sources Polynote, an IDE-like polyglot notebook with Scala support, Apache Spark integration, multi-language interoperability, and more
Tesla Software Version 10.0 adds Smart Summon, in-car karaoke, Netflix, Hulu, and Spotify streaming
Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels


EU antitrust regulators are investigating Google's data collection practices, reports Reuters

Sugandha Lahoti
03 Dec 2019
2 min read
Google is facing another antitrust investigation from the European Commission, even after paying record fines last year over its questionable data collection and advertising practices. According to a report by Reuters, EU antitrust regulators are investigating Google's data collection practices. "The Commission has sent out questionnaires as part of a preliminary investigation into Google's practices relating to Google's collection and use of data. The preliminary investigation is ongoing," the EU regulator told Reuters in an email.

Google said it uses data to better its services and that users can manage, delete, and transfer their data at any time. The EU is looking into "how and why" the company collects data, specifically related to "local search services, login services, web browser, and others," an executive told Reuters.

Google has previously been hit with three antitrust fines by the EU, with a total antitrust bill of around $9.3 billion to date. In March, the European Union fined Google 1.49 billion euros for antitrust violations in online advertising. Last year, the EU slapped Google with a $5 billion fine in the Android antitrust case. Google is also facing multiple scrutinies from the Irish DPC, the FTC, and an antitrust probe by US state attorneys general over its data collection and advertising practices. Also, following an investigation into YouTube launched by the Federal Trade Commission earlier this year, Google and YouTube have been fined a penalty of $170M to settle allegations that they broke federal law by collecting children's personal information via YouTube Kids.

The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple
The US Justice Department opens a broad antitrust review case against tech giants
EU Commission opens an antitrust case against Amazon on grounds of violating EU competition


Google will not support Cloud Print, its cloud-based printing solution starting 2021

Vincy Davis
25 Nov 2019
3 min read
Last week, Google notified its users that Cloud Print, Google's cloud-based printing solution, will not receive any support after December 31, 2020. The Cloud Print service has been in beta since 2010. It is a technology that enables users to print from any Cloud Print-aware application (web, desktop, or mobile) on any device in the network cloud to any printer. Google also advised its users to migrate to an alternative native printing solution before the beginning of 2021.

In the short support note, Google says that it has improved the native printing experience for Chrome OS users and will continue adding new features to it. "For environments besides Chrome OS, or in multi-OS scenarios, we encourage you to use the respective platform's native printing infrastructure and/or partner with a print solutions provider," adds Google.

Native print management features currently supported, or to be supported, in Chrome OS by the end of 2019:
The admin console interface will manage thousands of CUPS-based printers for users, devices, and managed guests by organizational unit.
The admin console policy will manage user printing defaults for 2-sided (duplex) and color printing.
Support for advanced printing attributes like stapling, paper trays, and PIN printing.
The admin console policy will include the user account and filename in the IPP header of the print job over a secure IPPS connection. This will enable third-party printing features such as secure printing and print-usage tracking.
PIN code printing will also be managed by the admin console policy. It will allow users to enter a PIN code when sending the print job and release the job by entering the PIN code on the printer keypad.

Read More: Google Chrome 'secret' experiment crashes browsers of thousands of IT admins worldwide

New print management features to be available for Chrome OS before 2021:
New support for external CUPS print servers, including authentication.
A policy that will configure connections to external CUPS print servers.
APIs for third parties to access print job metadata, submit print jobs, and use printer management capabilities.

Google has always been infamous for killing its own products. This year it has retired many products, including the Trips app, Google Inbox, and Hire by Google, to name a few.

Read More: Why Google kills its own products

Many users have expressed their disappointment with the retirement of Cloud Print.
https://twitter.com/Filmtographer/status/1197684347526144000
https://twitter.com/Jamie00015/status/1197863088017608704
https://twitter.com/dietler/status/1197692376149413888

A user on Hacker News labeled Google the 'land of walking-dead projects'. The comment read, "Good news: they give you a year to transition. Bad news: you'll have to buy a new printer if it doesn't play nicely with CUPS. Google really is the land of walking-dead projects."

Google starts experimenting with Manifest V3 extension in Chrome 80 Canary build
Google releases patches for two high-level security vulnerabilities in Chrome, one of which is still being exploited in the wild
Google AI introduces Snap, a microkernel approach to 'Host Networking'
Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
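For readers planning the migration off Cloud Print that Google recommends, the sketch below shows printing through a platform's native CUPS infrastructure from Python. It assumes a local CUPS daemon and the third-party pycups bindings; the file path and print options are placeholders.

```python
import cups  # pycups bindings for the CUPS API

conn = cups.Connection()          # talk to the local CUPS daemon
printers = conn.getPrinters()     # dict of configured print queues
print("Available queues:", list(printers))

# Send a document to the first queue; the options dict maps to IPP attributes.
queue = next(iter(printers))
conn.printFile(queue, "/tmp/report.pdf", "Quarterly report",
               {"sides": "two-sided-long-edge"})
```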

OpenAI releases Safety Gym for prioritizing safety exploration in reinforcement learning

Vincy Davis
22 Nov 2019
6 min read
Reinforcement learning (RL) agents explore their environments to learn optimal policies by trial and error. In such environments, one of the critical concerns is the safety of all the agents involved in the experiment. Though reinforcement learning agents are currently mostly run in simulation, there is a possibility that the increased complexity of real-world deployments will make safety concerns paramount. To make safe exploration a critical focus of reinforcement learning research, a group of OpenAI researchers have proposed standardized constrained reinforcement learning as a method for incorporating safety specifications into reinforcement learning algorithms and achieving safe exploration.

The major challenge of reinforcement learning is handling the trade-offs between competing objectives, such as task performance and satisfying safety requirements. However, in constrained reinforcement learning, "we don't have to pick trade-offs—instead, we pick outcomes, and let algorithms figure out the trade-offs that get us the outcomes we want," states OpenAI. Consequently, the researchers believe "constrained reinforcement learning may turn out to be more useful than normal reinforcement learning for ensuring that agents satisfy safety requirements."

Read More: OpenAI's AI robot hand learns to solve a Rubik's Cube using reinforcement learning and Automatic Domain Randomization (ADR)

The field of reinforcement learning has greatly progressed in recent years; however, different implementations use different environments and evaluation procedures. Hence, the researchers believe that there is a deficiency of a standard set of environments for making progress on safe exploration specifically. To this end, the researchers present Safety Gym, a suite of tools for accelerating safe exploration research. Safety Gym is a benchmark suite of 18 high-dimensional continuous control environments for safe exploration, 9 additional environments for debugging task performance separately from safety requirements, and tools for building additional environments.

https://twitter.com/OpenAI/status/1197559989704937473

How does Safety Gym prioritize safe exploration in reinforcement learning?
Safety Gym consists of two components. The first is an environment-builder that allows a user to create a new environment by mixing and matching from a wide range of physics elements, goals, and safety requirements. The other component is a suite of pre-configured benchmark environments to standardize the measure of progress on the safe exploration problem. Safety Gym is implemented as a standalone module that uses the OpenAI Gym interface for instantiating and interacting with reinforcement learning environments, and the MuJoCo physics simulator to construct and forward-simulate each environment. In line with the proposal of standardizing constrained reinforcement learning, each Safety Gym environment provides a separate objective for task performance and safety. These objectives are conveyed via a reward function and a set of auxiliary cost functions, respectively.

Key features of Safety Gym
Since there is a gradient of difficulty across benchmark environments, practitioners can quickly work on the simplest tasks before proceeding to the hardest ones.
The layout distribution of each benchmark environment is continuous and minimally restricted, allowing essentially infinite variations within each environment.
It is highly extensible.
The Safety Gym tools enable easy building of new environments with different layout distributions.

In all Safety Gym environments, an agent perceives its surroundings through a robot's sensors and interacts with the world through its actuators. Safety Gym ships with three pre-made robots.

Three pre-made robots included in the Safety Gym suite
Point is a simple robot that is limited to the 2D plane. It uses one actuator for turning and another for moving forward or backward. It has a small front-facing square which helps it with the Push task.
Car has two independently-driven parallel wheels and a free-rolling rear wheel. For this robot, turning and moving forward or backward require coordinating both of the actuators.
Doggo is a quadrupedal robot with bilateral symmetry. Each of the four legs has two controls at the hip, and one at the knee which controls the angle. It is designed such that a uniform random policy should keep the robot from falling over and generate some travel.
(Image source: research paper)

The environments currently support three main tasks: Goal, Button, and Push. The tasks in Safety Gym are mutually exclusive, so an environment works on only one task at a time. Safety Gym supports five main kinds of elements relevant to safety requirements: Hazards (dangerous areas to avoid), Vases (objects to avoid), Pillars (immobile obstacles), Buttons (incorrect goals), and Gremlins (moving objects). All of these constraint elements pose different challenges for the agent to avoid.

General trends observed during the experiment
After running the unconstrained and constrained reinforcement learning algorithms on the constrained Safety Gym environments, the researchers found that the unconstrained reinforcement learning algorithms are able to score high returns by taking unsafe actions, as measured by the cost function. On the other hand, the constrained reinforcement learning algorithms attain lower levels of return and correspondingly maintain desired levels of cost. They also found that standard reinforcement learning is able to control the Doggo robot and can acquire complex locomotion behavior, as indicated by high returns in the environments when trained without constraints. However, despite the success of constrained reinforcement learning when locomotion requirements are absent, and the success of standard reinforcement learning when locomotion is needed, the constrained reinforcement learning algorithms struggled to learn safe locomotion policies. The researchers state that additional research is needed to develop constrained reinforcement learning algorithms that can solve more challenging tasks.

Thus, the OpenAI researchers propose standardized constrained reinforcement learning as the main formalism for safe exploration. They also introduce Safety Gym, the first benchmark of high-dimensional continuous control environments for evaluating the performance of constrained reinforcement learning algorithms. The researchers have also evaluated baseline unconstrained and constrained reinforcement learning algorithms on Safety Gym environments to clarify the current state of the art in safe exploration. Many have appreciated Safety Gym's feature of prioritizing 'safety' first in AI.

https://twitter.com/gicorit/status/1197594242715131904
https://twitter.com/tupjarsakiv/status/1197597397918126085

Interested readers can read the research paper for more information on Safety Gym.
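Because Safety Gym exposes the standard OpenAI Gym interface, interacting with one of the pre-configured benchmark environments looks like ordinary Gym code, with the safety cost returned separately from the reward. Below is a minimal sketch, assuming Safety Gym and MuJoCo are installed and using the release's environment-naming pattern; a random policy stands in for a real constrained RL algorithm.

```python
import gym
import safety_gym  # noqa: F401 -- importing registers the Safety-* environments

# Point robot, Goal task, difficulty level 1.
env = gym.make('Safety-PointGoal1-v0')

obs = env.reset()
episode_return, episode_cost = 0.0, 0.0
for _ in range(1000):
    action = env.action_space.sample()         # random policy, for illustration only
    obs, reward, done, info = env.step(action)
    episode_return += reward                   # task-performance objective
    episode_cost += info.get('cost', 0.0)      # auxiliary safety cost, kept separate
    if done:
        obs = env.reset()

print(f"return: {episode_return:.1f}  cost: {episode_cost:.1f}")
```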
OpenAI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment
What does a data science team look like?
NVIDIA releases Kaolin, a PyTorch library to accelerate research in 3D computer vision and AI
Baidu adds Paddle Lite 2.0, new development kits, EasyDL Pro, and other upgrades to its PaddlePaddle deep learning platform
LG introduces Auptimizer, an open-source ML model optimization tool for efficient hyperparameter tuning at scale


NVIDIA releases Kaolin, a PyTorch library to accelerate research in 3D computer vision and AI 

Vincy Davis
19 Nov 2019
4 min read
Deep learning and 3D vision research have led to major developments in the fields of robotics and computer graphics. However, there is a dearth of systems that allow easy loading of popular 3D datasets and conversion of 3D data across various representations into modern machine learning frameworks. To overcome this barrier, researchers at NVIDIA have developed a 3D deep learning library for PyTorch called 'Kaolin'. Last week, the researchers published the details of Kaolin in a paper titled "Kaolin: A PyTorch Library for Accelerating 3D Deep Learning Research".

https://twitter.com/NvidiaAI/status/1194680942536736768

Kaolin provides an efficient implementation of all the core modules required to build 3D deep learning applications. According to NVIDIA, Kaolin can slash the job of preparing a 3D model for deep learning from 300 lines of code down to just five.

Key features offered by Kaolin
It supports all popular 3D representations: polygon meshes, point clouds, voxel grids, signed distance functions, and depth images.
It enables complex 3D datasets to be loaded into machine-learning frameworks, irrespective of how they're represented or will be rendered.
It can be applied in diverse fields, for instance robotics, self-driving cars, medical imaging, and virtual reality.
Kaolin has a suite of 3D geometric functions that allow manipulation of 3D content. Several rigid-body transformations can be implemented in a variety of parameterizations, such as Euler angles, Lie groups, and quaternions.
It also provides differentiable image warping layers and allows for 3D-2D projection and 2D-3D back-projection.
Kaolin reduces the large overhead involved in file handling, parsing, and augmentation to a single function call, and provides support for many 3D datasets like ShapeNet and PartNet. Access to all data is provided via extensions to the PyTorch Dataset and DataLoader classes, which makes pre-processing and loading 3D data simple and intuitive.

Kaolin's modular differentiable renderer
A differentiable renderer is a process that supplies pixels as a function of model parameters to simulate a physical imaging system, along with derivatives of the pixel values with respect to those parameters. To give users easy access to popular differentiable rendering methods, Kaolin provides a flexible and modular differentiable renderer. It defines an abstract base class called 'DifferentiableRenderer' which contains abstract methods for each component in a rendering pipeline. The abstract methods covered in Kaolin include geometric transformations, lighting, shading, rasterization, and projection, and multiple lighting, shading, projection, and rasterization modes are supported.

One of the important aspects of any computer vision task is visualizing data. Kaolin delivers visualization support for all of its representation types, implemented via lightweight visualization libraries such as Trimesh and pptk for run-time visualization.

The researchers say, "While we view Kaolin as a major step in accelerating 3D DL research, the efforts do not stop here. We intend to foster a strong open-source community around Kaolin, and welcome contributions from other 3D deep learning researchers and practitioners." The researchers are hopeful that the 3D community will try out Kaolin and contribute to its development. Many developers have expressed interest in the Kaolin PyTorch library.
https://twitter.com/RanaHanocka/status/1194763643700858880
https://twitter.com/AndrewMendez19/status/1194719320951197697

Read the research paper for more details about Kaolin's roadmap. You can also check out NVIDIA's official announcement.

Facebook releases PyTorch 1.3 with named tensors, PyTorch Mobile, 8-bit model quantization, and more
Transformers 2.0: NLP library with deep interoperability between TensorFlow 2.0 and PyTorch, and 32+ pretrained models in 100+ languages
Introducing ESPRESSO, an open-source, PyTorch based, end-to-end neural automatic speech recognition (ASR) toolkit for distributed training across GPUs
Baidu adds Paddle Lite 2.0, new development kits, EasyDL Pro, and other upgrades to its PaddlePaddle deep learning platform
CNCF announces Helm 3, a Kubernetes package manager and tool to manage charts and libraries
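Since the announcement stresses that data access goes through extensions of PyTorch's Dataset and DataLoader classes, the sketch below shows what loading a ShapeNet category might look like. The module path, class name, and arguments here are assumptions based on that description, so consult the Kaolin repository for the exact API.

```python
import torch
from torch.utils.data import DataLoader
# Hypothetical import path -- check the Kaolin docs for the actual module layout.
from kaolin.datasets import shapenet

# A PyTorch-style Dataset over ShapeNet meshes (argument names are illustrative).
dataset = shapenet.ShapeNet_Meshes(root='/data/ShapeNet', categories=['chair'])

# Meshes vary in size, so keep each batch as a plain list instead of stacking tensors.
loader = DataLoader(dataset, batch_size=8, shuffle=True,
                    collate_fn=lambda batch: batch)

for batch in loader:
    # Each sample would expose vertices and faces ready for a 3D deep learning model.
    print(len(batch))
    break
```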


Baidu adds Paddle Lite 2.0, new development kits, EasyDL Pro, and other upgrades to its PaddlePaddle deep learning platform

Vincy Davis
15 Nov 2019
3 min read
Yesterday, Baidu's open deep learning platform PaddlePaddle (PArallel Distributed Deep LEarning) released its latest version with 21 new offerings, including Paddle Lite 2.0, four end-to-end development kits (among them ERNIE for semantic understanding), new toolkits, and other upgrades. PaddlePaddle is an easy-to-use, flexible, and scalable deep learning platform developed for applying deep learning to many products at Baidu.

Paddle Lite 2.0
The main goal of Paddle Lite is to maintain low latency and high efficiency for AI applications running on resource-constrained devices. Launched last year, Paddle Lite is customized for inference on mobile, embedded, and IoT devices, and is compatible with PaddlePaddle and other pre-trained models. With enhanced usability in Paddle Lite 2.0, developers can deploy ResNet-50 with seven lines of code. The new version adds support for more hardware units, such as edge-based FPGAs, and also permits low-precision inference using operators with the INT8 data type.

New development kits
The development kits aim to continuously reduce the development threshold for low-cost and rapid model construction.
ERNIE for semantic understanding (NLP): ERNIE (Enhanced Representation through kNowledge IntEgration) is a continual pre-training framework for semantic understanding. Earlier this year, in July, Baidu open-sourced the ERNIE 2.0 model and revealed that it outperformed BERT and XLNet in 16 NLP tasks, including English tasks on GLUE benchmarks and several Chinese tasks.
PaddleDetection: It has more than 60 easy-to-use object detection models.
PaddleSeg for computer vision (CV): It is an end-to-end image segmentation library that supports data augmentation, modular design, and end-to-end deployment.
Elastic CTR for recommendation: Elastic CTR is a newly released solution that provides process documentation for distributed training on Kubernetes (k8s) clusters. It also provides distributed parameter deployment forecasts as a one-click solution.

EasyDL Pro
EasyDL is an AI platform for novice developers to train and build custom models via a drag-and-drop interface. EasyDL Pro is a one-stop AI development platform for algorithm engineers to deploy AI models with fewer lines of code.

Master mode
The Master mode will help developers customize models for specific tasks. It includes a large library of pre-trained models and tools for transfer learning.

Other new upgrades
New toolkits for graph, federated, and multi-task learning.
APIs upgraded for flexibility, usability, and improved documentation.
A new PaddlePaddle module for model compression called PaddleSlim, which enables a quantitative training function and a hardware-based small-model search capability.
Paddle2ONNX and X2Paddle upgraded for improved conversion of trained models between PaddlePaddle and other frameworks.

Head over to Baidu's blog for more details.

Baidu open sources 'OpenEdge' to create a 'lightweight, secure, reliable and scalable edge computing community'
Unity and Baidu collaborate for simulating the development of autonomous vehicles
CNCF announces Helm 3, a Kubernetes package manager and tool to manage charts and libraries
GitHub Universe 2019: GitHub for mobile, GitHub Archive Program and more announced amid protests against GitHub's ICE contract
Brave 1.0 releases with focus on user privacy, crypto currency-centric private ads and payment platform

LG introduces Auptimizer, an open-source ML model optimization tool for efficient hyperparameter tuning at scale

Bhagyashree R
12 Nov 2019
4 min read
Last week, researchers from LG’s Advanced AI team open-sourced Auptimizer, a general hyperparameter optimization (HPO) framework that helps data scientists speed up machine learning model tuning.

What challenges Auptimizer aims to address
Hyperparameters are adjustable parameters that govern the training process of a machine learning model. They represent important properties of a model, for instance, the penalty in a logistic regression classifier or the learning rate for training a neural network. Tuning hyperparameters can be a very tedious task, especially when model training is computationally intensive. There are currently both open source and commercial automated HPO solutions, such as Google AutoML, Amazon SageMaker, and Optunity. However, using them at scale still poses challenges. In a paper explaining the motivation and system design behind Auptimizer, the team wrote, “But, in all cases, adopting new algorithms or accommodating new computing resources is still challenging.” To address these challenges, the team built Auptimizer, which aims to automate the tedious tasks involved in building a machine learning model. The initial open-source release provides the following advantages.

Easily switch among different HPO algorithms without rewriting the training script
Getting started with Auptimizer only requires adding a few lines of code, after which it guides you through setting up all other experiment-related configurations. This lets users switch among different HPO algorithms and computing resources without rewriting their training script, which is one of the key hurdles in HPO adoption. Once set up, it runs and records sophisticated HPO experiments for you.

Orchestrating compute resources for faster hyperparameter tuning
Users can specify the resources to be used in experiment configurations, including processors, graphics chips, nodes, and public cloud instances such as Amazon Web Services EC2. Auptimizer keeps track of the resources in a persistent database and queries it to check whether the resources specified by the user are available. If a resource is available, Auptimizer takes it for job execution; if not, the system waits until it is free. Auptimizer is also compatible with existing resource management tools such as Boto 3.

A single interface to various sophisticated HPO algorithms
The current implementation provides a “single seamless access point to top-notch HPO algorithms” such as Spearmint, Hyperopt, Hyperband, and BOHB, and also supports simple random search and grid search. Users can integrate their own proprietary solutions and switch between different HPO algorithms with minimal changes to their existing code.
Table: HPO techniques currently supported by Auptimizer (source: LG).

How Auptimizer works
Figure: Auptimizer System Design (source: LG).
The key components of Auptimizer are the Proposer and the Resource Manager. The Proposer interface defines two functions: ‘get_param()’, which returns the new hyperparameter values, and ‘update()’, which updates the history. The Resource Manager is responsible for automatically connecting compute resources to model training when they are available. Its ‘get_available()’ function acts as the interface between Auptimizer and typical resource management and job scheduling tools, and the ‘run()’ function, as the name suggests, executes the provided code.
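The article only names these interface methods; below is a minimal, self-contained sketch of a random-search proposer exposing get_param() and update(). The class name, constructor arguments, and the toy training function are illustrative assumptions, not Auptimizer's actual classes or module layout.

```python
import random

class RandomSearchProposer:
    """Illustrative proposer: get_param() returns the next hyperparameter
    values to try, update() records the result of a finished trial."""

    def __init__(self, search_space, n_trials=5):
        self.search_space = search_space   # name -> (low, high) range
        self.n_trials = n_trials
        self.history = []                  # list of (params, score) pairs

    def get_param(self):
        # Propose a new random configuration, or None when the budget is spent.
        if len(self.history) >= self.n_trials:
            return None
        return {name: random.uniform(low, high)
                for name, (low, high) in self.search_space.items()}

    def update(self, params, score):
        # Record a finished trial so the proposer can track history.
        self.history.append((params, score))


def train(params):
    # Stand-in for the user's training script; returns a score to maximize.
    return -(params["lr"] - 0.01) ** 2


proposer = RandomSearchProposer({"lr": (1e-4, 1e-1)})
while True:
    params = proposer.get_param()
    if params is None:
        break
    proposer.update(params, train(params))

print("best params:", max(proposer.history, key=lambda h: h[1])[0])
```

The decoupling shown here is the point the paper makes: switching to a different HPO algorithm means swapping the proposer, while the training script stays untouched.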
To enable reproducibility, Auptimizer tracks all experiment history in a user-specified database. Users can visualize the results from that history with a basic visualization tool that ships with Auptimizer, or query the database directly for further analysis. Sharing the future vision for Auptimizer, the team wrote, “As development progress, Auptimizer will support the end-to-end development cycle for building models for edge devices including robust support for model compression and neural architecture search.” This article gave you a basic introduction to Auptimizer. Check out the paper, Auptimizer - an Extensible, Open-Source Framework for Hyperparameter Tuning, and the GitHub repository to learn more.

Related reads:
Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset
PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility
Training Deep Convolutional GANs to generate Anime Characters [Tutorial]


Neo4j introduces Aura, a new cloud service to supply a flexible, reliable and developer-friendly graph database

Vincy Davis
07 Nov 2019
2 min read
Last year, Neo4j announced the availability of its Enterprise Edition under a commercial license aimed at larger companies. Yesterday, the graph database management firm introduced a new managed cloud service called Aura, directed at smaller companies. The new service targets the market segment between those larger enterprises and users of Neo4j’s open source product.
https://twitter.com/kfreytag/status/1192076546070253568
Aura aims to supply a flexible, reliable and developer-friendly graph database. In an interview with TechCrunch, Emil Eifrem, CEO and co-founder at Neo4j, says, “To get started with, an enterprise project can run hundreds of thousands of dollars per year. Whereas with Aura, you can get started for about 50 bucks a month, and that means that it opens it up to new segments of the market.”
Aura offers a clear value proposition, a flexible pricing model, and additional management and security features, and it scales with a company’s growing data requirements. In simple words, Aura seeks to simplify developers’ work by letting them focus on building applications while Neo4j takes care of the company’s database (a minimal connection sketch follows this article's related reads). Many developers are excited to try out Aura.
https://twitter.com/eszterbsz/status/1192359850375884805
https://twitter.com/IntriguingNW/status/1192352241853849600
https://twitter.com/sixwing/status/1192090394244333569

Related reads:
Neo4j rewarded with $80M Series E, plans to expand company
Neo4j 3.4 aims to make connected data even more accessible
Introducing PostgREST, a REST API for any PostgreSQL database written in Haskell
Linux Foundation introduces strict telemetry data collection and usage policy for all its projects
MongoDB is partnering with Alibaba
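To give a sense of what connecting to a managed Aura instance looks like, here is a minimal sketch using the official Neo4j Python driver (pip install neo4j). The URI, credentials, and Cypher query are placeholders, not details from the announcement; the neo4j+s:// scheme is the encrypted connection scheme Aura instances expose.

```python
from neo4j import GraphDatabase

# Placeholders: substitute your own Aura connection URI and credentials.
uri = "neo4j+s://<your-instance-id>.databases.neo4j.io"
driver = GraphDatabase.driver(uri, auth=("neo4j", "<password>"))

with driver.session() as session:
    # Example query: who has the most KNOWS relationships?
    result = session.run(
        "MATCH (p:Person)-[:KNOWS]->(friend) "
        "RETURN p.name AS name, count(friend) AS friends "
        "ORDER BY friends DESC LIMIT 5"
    )
    for record in result:
        print(record["name"], record["friends"])

driver.close()
```

Because Aura exposes a standard Bolt endpoint, the driver usage is the same as for a self-hosted Neo4j server; only the connection URI changes.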