
How-To Tutorials - Artificial Intelligence

83 Articles

How to Integrate AI into Software Development Teams

Anderson Soares Furtado Oliveira
21 Nov 2024
15 min read
This article is an excerpt from the book "AI Strategies for Web Development" by Anderson Soares Furtado Oliveira. Embark on an enlightening AI journey by understanding its role and its fundamentals, crafting cutting-edge applications, and navigating ethical challenges. You'll also explore strategic tools and gain foresight into future trends.

Introduction

Integrating AI into software development teams is no longer a futuristic concept; it is a strategic necessity in today's digital era. AI has the potential to revolutionize software development by optimizing processes, solving complex problems, improving user experience, and driving business value. However, harnessing the power of AI requires more than just adopting new tools; it demands a shift in mindset, processes, skills, and team culture. In this article, we explore actionable strategies for software engineering leaders to successfully integrate AI into their teams, drawing from Gartner's recommendations and industry best practices. From fostering collaboration and upskilling teams to implementing data pipelines and AI solutions, these steps will help organizations fully leverage AI's transformative potential.

How to integrate AI into software development teams

AI is a technology that can transform the way we create and use software applications. It can help us solve complex problems, optimize processes, improve UX, and generate value for businesses. However, to fully leverage the potential of AI, it needs to be effectively integrated into software development teams. In this section, we present some actions that software engineering leaders should consider in order to achieve this goal, based on Gartner's recommendations (https://www.gartner.com/en/articles/set-up-now-for-ai-to-augment-software-development).

Let's start:

Adopt an AI mindset from the start: The first action is to adopt an AI mindset from the start of the project, encouraging the exploration of AI techniques to improve application development. This means that developers should be open to learning about the possibilities and challenges of AI and seek innovative solutions that use this technology. In addition, leaders should set clear and measurable goals for the use of AI and align expectations with project stakeholders. So, encourage teams to explore AI by initiating projects that directly involve AI technologies. For instance, a development team could be tasked with creating a chatbot to streamline customer service interactions, encouraging them to learn and apply NLP techniques.

Provide a framework to identify AI opportunities: The second action is to provide a framework for identifying when and where AI can yield better results. This involves analyzing the needs and requirements of the project and assessing whether AI can offer benefits in terms of quality, efficiency, scalability, security, or other aspects. It is also important to consider the costs and risks associated with implementing AI and compare them with available alternatives. The framework should guide developers in choosing the most suitable AI techniques for each case, such as ML, NLP, and computer vision. Develop a decision matrix to help identify opportunities for AI integration that can enhance project outcomes.
This matrix could evaluate factors such as potential improvements in efficiency and quality against the costs and complexity of implementing AI solutions, helping to pinpoint where tools such as ML could be most beneficial (a minimal scoring sketch appears after this list of actions).

Invest in dedicated AI solutions: The third action is to invest in dedicated AI solutions to support various roles and tasks in software engineering. These solutions can be tools, platforms, services, or libraries that use AI to facilitate or automate activities such as design, coding, testing, debugging, integration, deployment, and monitoring. These solutions can increase the productivity, quality, and creativity of developers, as well as reduce errors and rework. Some examples of AI solutions for software engineering are intelligent assistants, code generators, code analyzers, and automatic testers. For example, implementing platforms such as TensorFlow or PyTorch for ML projects can aid in tasks ranging from predictive analytics to automated testing, thus boosting productivity and reducing the likelihood of errors.

Expand the data engineering pipeline: The fourth action is to expand the data engineering pipeline to leverage AI enrichment and enable intelligent applications. This means that developers should collect, store, process, analyze, and visualize data efficiently and securely, using AI to extract insights and value from data. In addition, developers should integrate the data with AI models and use these models to provide intelligent features to applications, such as recommendations, customizations, predictions, and detections. Intelligent applications can improve performance, usability, and end-user satisfaction. By integrating comprehensive data management tools such as Apache Kafka for real-time data streaming and processing, teams can enhance their applications with features such as real-time analytics and dynamic UX customization.

Foster collaboration between development and model-building teams: The fifth action is to foster collaboration between development teams and model-building teams to avoid overlapping responsibilities and ensure smooth deployment. This involves creating a culture of collaboration and communication, where both teams understand their roles and responsibilities and work together to implement AI solutions. This can help avoid conflicts, reduce delays, and ensure that the AI models are correctly integrated into the software applications. Establish regular sync-up meetings between software developers and AI model builders to ensure alignment and seamless integration of AI capabilities into applications. These meetings can help clarify responsibilities, share insights, and quicken the pace of development.

Continuously train and upskill the team: The sixth action is to continuously train and upskill the team in AI technologies. This involves providing regular training sessions, workshops, and resources to help developers learn about the latest AI techniques and tools. It also involves creating a learning culture where developers are encouraged to learn and share their knowledge with others. This can help build a team of skilled AI practitioners who can effectively use AI to improve software development. Create ongoing educational programs and provide access to courses from platforms such as Coursera or Udemy that cover advanced AI topics. Encouraging participation in hackathons or internal projects focused on AI can also foster practical experience and innovation.
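To make the decision-matrix idea from the second action concrete, here is a minimal, illustrative Python sketch. The candidate use cases, criteria, and weights are hypothetical examples rather than values from the book; the point is only to show how a weighted scoring matrix can rank AI opportunities against cost and complexity.

```python
# Minimal, illustrative decision matrix for ranking AI opportunities.
# Candidates, criteria, and weights below are hypothetical examples.

CRITERIA_WEIGHTS = {
    "efficiency_gain": 0.35,       # expected improvement in delivery speed
    "quality_gain": 0.35,          # expected reduction in defects
    "implementation_cost": -0.15,  # higher cost lowers the score
    "complexity": -0.15,           # higher complexity lowers the score
}

# Each candidate is scored 1-5 per criterion by the team.
candidates = {
    "AI code review assistant":     {"efficiency_gain": 4, "quality_gain": 4, "implementation_cost": 2, "complexity": 2},
    "Chatbot for customer support": {"efficiency_gain": 3, "quality_gain": 2, "implementation_cost": 3, "complexity": 3},
    "ML-based test case generation": {"efficiency_gain": 5, "quality_gain": 4, "implementation_cost": 4, "complexity": 4},
}

def score(ratings: dict) -> float:
    """Weighted sum of the team's 1-5 ratings; negative weights penalize cost and complexity."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

if __name__ == "__main__":
    ranked = sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True)
    for name, ratings in ranked:
        print(f"{name}: {score(ratings):.2f}")
```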
Effectively integrating AI into software development teams is a complex task that requires a strategic and diligent approach. It's not just about adopting new tools or technologies but transforming the mindset, processes, skills, and culture of the team. To navigate this transformation successfully, a structured checklist can serve as a valuable guide, ensuring that every critical aspect is addressed systematically:

1. Assessment and planning:
Identify objectives: Define clear objectives for integrating AI into your development processes. Determine what problems you aim to solve or what improvements you want to achieve.
Evaluate readiness: Assess your team's current capabilities, infrastructure, and tools to determine readiness for AI integration.
Stakeholder alignment: Ensure all stakeholders understand the benefits and implications of AI integration. Secure their support and alignment with the project goals.

2. Data collection and management:
Identify data sources: Determine the types of data that will be valuable for AI-driven insights (e.g., source code data, user interaction data, performance data).
Set up data pipelines: Implement data pipelines using tools such as Apache Kafka for real-time data collection and streaming.
Ensure data quality: Establish processes for data cleaning, normalization, and validation to maintain high data quality.

3. Infrastructure and tools:
Select AI tools: Choose appropriate AI-powered tools for different stages of the development process, such as GitHub Copilot for code generation, Testim for automated testing, and Dynatrace for performance monitoring.
Scalable storage solutions: Implement scalable storage solutions such as Amazon S3 or Google Cloud Storage to handle large volumes of data.
Processing frameworks: Utilize data processing frameworks such as Apache Spark or Flink for efficient data processing.

4. Model development and integration:
Build AI models: Use ML frameworks such as TensorFlow, PyTorch, and scikit-learn to develop AI models that can analyze data and generate insights.
Integrate AI models: Integrate AI models into your development environment to provide intelligent features such as code suggestions, anomaly detection, and predictive analytics.

5. Testing and validation:
Automated testing tools: Implement AI-powered automated testing tools such as Testim to create and maintain test cases, ensuring the software remains robust and error-free.
Continuous integration: Set up continuous integration (CI) pipelines to automatically run tests and validate code changes.
Performance monitoring: Use tools such as New Relic AI and Dynatrace to monitor application performance and detect issues in real time.

6. Security and compliance:
Vulnerability scanning: Use AI-powered security tools such as Snyk and Veracode to identify and fix vulnerabilities in the code.
Compliance checks: Ensure that AI models and data processing adhere to relevant regulations and standards, such as the General Data Protection Regulation (GDPR).
7. Deployment and maintenance:
Automated deployment: Set up automated deployment pipelines to streamline the release process.
Real-time monitoring: Continuously monitor the application in production using tools such as Amazon CloudWatch and Splunk for anomaly detection.
Feedback loop: Establish a feedback loop to collect user feedback and performance data, using this information to continuously improve the AI models and development processes.

By following these actions, software engineering leaders can effectively integrate AI into their teams and leverage its potential to create innovative, high-quality, and intelligent software applications. This can lead to significant improvements in productivity, quality, creativity, and user satisfaction, as well as provide a competitive edge in today's increasingly digital and data-driven market.

However, it's important to remember that AI is just a tool that can help solve problems and generate value. The ultimate success of the project depends on the team's ability to understand user needs, create effective and innovative solutions, and deliver high-quality software. Therefore, AI should be integrated in a way that supports and enhances these goals, rather than replacing them.

Conclusion

Integrating AI into software development teams is a multifaceted process that goes beyond adopting cutting-edge tools. It involves fostering a culture of collaboration, continuous learning, and innovation, as well as ensuring robust data management, security, and compliance frameworks. By following a structured approach, starting with clear objectives and readiness assessments, implementing advanced tools and frameworks, and maintaining continuous validation and feedback loops, software engineering leaders can unlock AI's full potential. This integration will not only enhance productivity and quality but also empower teams to create intelligent, high-performing applications that meet user needs and provide a competitive edge. Ultimately, AI should be a powerful enabler, complementing human creativity and expertise to deliver software solutions that truly excel.

Author Bio

Anderson Soares Furtado Oliveira is an experienced executive, AI strategist, and machine learning engineer specializing in AI governance, risk management, and compliance. As a board member at The Global Center for Risk and Innovation (GCRI) and an AI strategy consultant at G³ AI Global, he co-authored the book PgM Canvas: Transforming Vision into Real Benefits - A Program Management Guide for Leaders and Managers. With over a decade of experience in IT governance (CGEIT) and a focus on integrating AI technologies to drive business growth, he has led numerous AI projects and developed AI governance frameworks. His expertise in digital transformation and national development has equipped him to create innovative solutions and ethical AI applications. Anderson is a PhD student in Computer Science and Computational Mathematics at the University of São Paulo and holds an MBA in Software Engineering Project Management.


3 different types of generative adversarial networks (GANs) and how they work

Packt Editorial Staff
08 Jan 2020
6 min read
Generative adversarial networks (GANs) have been greeted with real excitement since their creation back in 2014 by Ian Goodfellow and his research team. Yann LeCun, Facebook's Director of AI Research, went as far as describing GANs as "the most interesting idea in the last 10 years in ML." With all this excitement, however, it can be easy to miss the subtle diversity of GANs; there are a number of different types of generative adversarial networks, each one working in slightly different ways and helping engineers to achieve slightly different results. To give you a deeper insight into GANs, in this article we'll look at three different generative adversarial networks: SRGANs, CycleGANs, and InfoGANs. We'll explore how these different GANs work and how they can be used. This should give you a solid foundation to explore GANs in more depth and begin to apply them in your own experiments and projects.

This article is an excerpt from the book Deep Learning with TensorFlow 2 and Keras, Second Edition by Antonio Gulli, Amita Kapoor, and Sujit Pal.

SRGAN - Super Resolution GANs

Remember seeing a crime thriller where our hero asks the computer guy to magnify the faded image of the crime scene? With the zoom we are able to see the criminal's face in detail, including the weapon used and anything engraved upon it! Well, SRGAN can perform similar magic. Here a GAN is trained in such a way that it can generate a photorealistic high-resolution image when given a low-resolution image. The SRGAN architecture consists of three neural networks: a very deep generator network, a discriminator network, and a pretrained VGG-19 network.

How do SRGANs work? SRGANs use the perceptual loss function (developed by Johnson et al., Perceptual Losses for Real-Time Style Transfer and Super-Resolution). The perceptual loss is computed from the difference in feature map activations in the higher layers of a VGG network between the generated output image and the high-resolution reference image. Besides this content term, the authors further added an adversarial loss so that generated images look more natural and the finer details more artistic. The perceptual loss is defined as the weighted sum of content loss and adversarial loss:

l^SR = l^SR_X + 10^-3 × l^SR_Gen

The first term on the right-hand side is the content loss, obtained using the feature maps generated by a pretrained VGG-19. Mathematically, it is the Euclidean distance between the feature map of the reconstructed image (that is, the one generated by the generator) and that of the original high-resolution reference image. The second term on the right-hand side is the adversarial loss. It is the standard generative loss term, designed to ensure that images generated by the generator are able to fool the discriminator. You can see in the following figure, taken from the original paper, that the image generated by SRGAN is much closer to the original high-resolution image:

[Image via https://arxiv.org/pdf/1609.04802.pdf]
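To make the content-loss term above more concrete, here is a minimal TensorFlow 2 sketch that compares VGG-19 feature maps of a generated image and its high-resolution reference and adds the 10^-3-weighted adversarial term. It is an illustration of the idea under simplified assumptions; the chosen feature layer, the preprocessing, and the toy adversarial term are not the book's or the paper's exact implementation.

```python
import tensorflow as tf

# Sketch of SRGAN-style content loss: Euclidean distance between VGG-19 feature
# maps of the generated image and the high-resolution reference. The layer name
# and the 1e-3 adversarial weight follow the description above; this is an
# illustration, not the paper's full training loop.

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
feature_extractor = tf.keras.Model(vgg.input, vgg.get_layer("block5_conv4").output)
feature_extractor.trainable = False

def content_loss(hr_image, sr_image):
    # VGG-19 expects preprocessed inputs in the [0, 255] range.
    hr_feat = feature_extractor(tf.keras.applications.vgg19.preprocess_input(hr_image))
    sr_feat = feature_extractor(tf.keras.applications.vgg19.preprocess_input(sr_image))
    return tf.reduce_mean(tf.square(hr_feat - sr_feat))

def perceptual_loss(hr_image, sr_image, disc_logits_on_sr):
    # Adversarial term: the generator wants the discriminator to predict "real" (1).
    adv = tf.reduce_mean(
        tf.keras.losses.binary_crossentropy(tf.ones_like(disc_logits_on_sr),
                                            disc_logits_on_sr, from_logits=True))
    return content_loss(hr_image, sr_image) + 1e-3 * adv
```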
CycleGAN

Another noteworthy architecture is CycleGAN; proposed in 2017, it can perform the task of image translation. Once trained, you can translate an image from one domain to another. For example, when trained on a horse and zebra dataset, if you give it an image of horses in a field, CycleGAN can convert the horses to zebras against the same background.

How does CycleGAN work? Have you ever imagined how a scene would look if Van Gogh or Manet had painted it? We have many scenes, and many landscapes painted by Van Gogh or Manet, but we do not have any collection of input-output pairs. CycleGAN performs image translation, that is, it transfers an image given in one domain (a scene, for example) to another domain (a Van Gogh painting of the same scene, for instance) in the absence of paired training examples. CycleGAN's ability to perform image translation in the absence of training pairs is what makes it unique.

To achieve image translation, the authors of CycleGAN used a very simple yet effective procedure. They made use of two GANs, with the generator of each GAN performing the image translation from one domain to the other. To elaborate, let us say the input is X; then the generator of the first GAN performs a mapping G: X → Y, so its output is Y = G(X). The generator of the second GAN performs an inverse mapping F: Y → X, resulting in X = F(Y). Each discriminator is trained to distinguish between real images and synthesized images. To train the combined GANs, the authors added, besides the conventional GAN adversarial loss, a forward cycle consistency loss and a backward cycle consistency loss. The forward term ensures that if an image X is given as input, then after the two translations, F(G(X)) ≈ X, the obtained image is close to the original X; similarly, the backward cycle consistency loss ensures that G(F(Y)) ≈ Y (a short code sketch of this cycle-consistency term appears at the end of this article). CycleGAN has produced many successful image translations, such as the translation of seasons (summer → winter), photo → painting and vice versa, and horses → zebras.

InfoGAN

The GAN architectures that we have considered up to now give us little or no control over the generated images. InfoGAN changes this; it provides control over various attributes of the generated images. InfoGAN uses concepts from information theory so that the noise term is transformed into latent codes, which provide predictable and systematic control over the output.

How does InfoGAN work? The generator in InfoGAN takes two inputs: the latent vector Z and a latent code c, so the output of the generator is G(Z, c). The GAN is trained such that it maximizes the mutual information between the latent code c and the generated image G(Z, c). The concatenated vector (Z, c) is fed to the generator. Q(c|X) is another neural network; combined with the generator, it forms a mapping between random noise Z and an estimate c_hat of its latent code, that is, it aims to estimate c given X. This is achieved by adding a regularization term to the objective function of the conventional GAN:

min_G max_D V_I(D, G) = V(D, G) − λ I(c; G(Z, c))

The term V(D, G) is the loss function of the conventional GAN, and the second term is the regularization term, where λ is a constant (its value was set to 1 in the paper) and I(c; G(Z, c)) is the mutual information between the latent code c and the image G(Z, c) produced by the generator. In the original paper, these latent codes are demonstrated on the MNIST dataset, where they control attributes such as the digit type, rotation, and width of the generated digits.

That concludes our brief look at three different types of generative adversarial networks. You can find the book from which this article was taken on the Packt store, or you can read the first chapter for free on the Packt subscription platform.
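As a companion to the CycleGAN section above, here is a minimal, hedged sketch of the cycle-consistency term. G and F are assumed to be any Keras image-to-image generators, and the λ = 10 weight follows the convention of the CycleGAN paper; this is an illustration, not a full training loop.

```python
import tensorflow as tf

# Minimal sketch of CycleGAN's cycle-consistency loss.
# G: X -> Y and F: Y -> X are assumed to be Keras image-to-image generators.

def cycle_consistency_loss(G, F, real_x, real_y, lam=10.0):
    # Forward cycle: x -> G(x) -> F(G(x)) should reconstruct x (L1 distance).
    forward = tf.reduce_mean(tf.abs(F(G(real_x)) - real_x))
    # Backward cycle: y -> F(y) -> G(F(y)) should reconstruct y.
    backward = tf.reduce_mean(tf.abs(G(F(real_y)) - real_y))
    return lam * (forward + backward)
```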


Emmanuel Tsukerman on why a malware solution must include a machine learning component

Savia Lobo
30 Dec 2019
11 min read
Machine learning is indeed the tech of present times! Security is a growing concern for many organizations today, and machine learning is one of the solutions for dealing with it. ML can help cybersecurity systems analyze patterns and learn from them to help prevent similar attacks and respond to changing behavior. To learn more about machine learning and its application in cybersecurity, we had a chat with Emmanuel Tsukerman, a cybersecurity data scientist and the author of Machine Learning for Cybersecurity Cookbook. The book uses modern AI to create powerful cybersecurity solutions for malware, pentesting, social engineering, data privacy, and intrusion detection. In 2017, Tsukerman's anti-ransomware product was listed in the Top 10 ransomware products of 2018 by PC Magazine.

In his interview, Emmanuel talked about how ML algorithms help in solving problems related to cybersecurity, and also gave a brief tour through a few chapters of his book. He also touched upon the rise of deepfakes and malware classifiers.

On using machine learning for cybersecurity

Using machine learning in cybersecurity scenarios will enable systems to identify different types of attacks across security layers and also help to choose a correct plan of action. Can you share some examples of the successful use of ML for cybersecurity you have seen recently?

A recent and interesting development in cybersecurity is that the bad guys have started to catch up with technology; in particular, they have started utilizing deepfake tech to commit crime. For example, they have used AI to imitate the voice of a CEO in order to defraud a company of $243,000. On the other hand, the use of ML in malware classifiers is rapidly becoming an industry standard, due to the incredible number of never-before-seen samples (over 15,000,000) that are generated each year.

On staying updated with developments in technology to defend against attacks

Machine learning technology is not only used by ethical actors, but also by cybercriminals who use ML for ML-based intrusions. How can organizations counter such scenarios and ensure the safety of confidential organizational and personal data?

The main tools that organizations have at their disposal to defend against attacks are to stay current and to pentest. Staying current, of course, requires getting educated on the latest developments in technology and its applications. For example, it's important to know that hackers can now use AI-based voice imitation to impersonate anyone they would like. This knowledge should be propagated in the organization so that individuals aren't caught off guard. The other way to improve one's security is by performing regular pen tests using the latest attack methodology, be it by attempting to avoid the organization's antivirus, sending phishing communications, or attempting to infiltrate the network. In all cases, it is important to utilize the most dangerous techniques, which are often ML-based.

On how ML algorithms and GANs help in solving cybersecurity problems

In your book, you have mentioned various algorithms such as clustering, gradient boosting, random forests, and XGBoost. How do these algorithms help in solving problems related to cybersecurity?

Unless a machine learning model is limited in some way (e.g., in computation, in time, or in training data), there are five types of algorithms that have historically performed best: neural networks, tree-based methods, clustering, anomaly detection, and reinforcement learning (RL).
These are not necessarily disjoint, as one can, for example, perform anomaly detection via neural networks. Nonetheless, to keep it simple, let's stick to these five classes.

Neural networks shine with large amounts of data on visual, auditory, or textual problems. For that reason, they are used in deepfakes and their detection, lie detection, and speech recognition. Many other applications exist as well. But one of the most interesting applications of neural networks (and deep learning) is in creating data via generative adversarial networks (GANs). GANs can be used to generate password guesses and evasive malware. For more details, I'll refer you to the Machine Learning for Cybersecurity Cookbook.

The next class of models that perform well are tree-based. These include random forests and gradient-boosted trees. They perform well on structured data with many features. For example, the PE header of PE files (including malware) can be featurized, yielding ~70 numerical features. It is convenient and effective to construct an XGBoost model (a gradient-boosting model) or a random forest model on this data, and the odds are good that performance will be unbeatable by other algorithms (a minimal sketch of such a classifier appears below).

Next there is clustering. Clustering shines when you would like to segment a population automatically. For example, you might have a large collection of malware samples and you would like to classify them into families. Clustering is a natural choice for this problem.

Anomaly detection lets you fight off unseen and unknown threats. For instance, when a hacker utilizes a new tactic to intrude on your network, an anomaly detection algorithm can protect you even if this new tactic has not been documented.

Finally, RL algorithms perform well on dynamic problems. The situation can be, for example, a penetration test on a network. The DeepExploit framework, covered in the book, utilizes an RL agent on top of Metasploit to learn from prior pen tests and become better and better at finding vulnerabilities.

Generative adversarial networks (GANs) are a popular branch of ML used to train systems against counterfeit data. How can these help in malware detection and in safeguarding systems to identify correct intrusion?

A good way to think about GANs is as a pair of neural networks pitted against each other. The loss of one is the objective of the other. As the two networks are trained, each becomes better and better at its job. We can then take whichever side of the "tug of war" battle, separate it from its rival, and use it. In other cases, we might choose to "freeze" one of the networks, meaning that we do not train it, but only use it for scoring.

In the case of malware, the book covers how to use MalGAN, which is a GAN for malware evasion. One network, the detector, is frozen; in this case, it is an implementation of MalConv. The other network, the adversarial network, is trained to modify malware until the detection score of MalConv drops to zero. As it trains, it becomes better and better at this. In a practical situation, we would want to unfreeze both networks. Then we can take the trained detector and use it as part of our anti-malware solution. We would then be confident knowing that it is very good at detecting evasive malware. The same ideas can be applied in a range of cybersecurity contexts, such as intrusion and deepfakes.
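To illustrate the tree-based approach described above, here is a minimal scikit-learn sketch that trains a random forest on a synthetic stand-in for featurized PE headers (~70 numerical features). The data and labels are randomly generated purely for illustration; in practice the features would come from parsing real binaries, for example with a library such as pefile.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for featurized PE headers: ~70 numerical features per sample.
# In practice these would be extracted from real binaries (e.g., with pefile).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 70))
y = rng.integers(0, 2, size=2000)   # 0 = benign, 1 = malware (random labels here)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```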
On how the Machine Learning for Cybersecurity Cookbook can help with easy implementation of ML for cybersecurity problems

What are some of the tools and recipes mentioned in your book that can help cybersecurity professionals easily implement machine learning and make it a part of their day-to-day activities?

The Machine Learning for Cybersecurity Cookbook offers an astounding 80+ recipes. The most applicable recipes will vary between individual professionals, and even for each individual, different recipes will be applicable at different times in their careers. For a cybersecurity professional beginning to work with malware, the fundamentals chapter, Chapter 2, ML-based Malware Detection, provides a solid and excellent start to creating a malware classifier. For more advanced malware analysts, Chapter 3, Advanced Malware Detection, offers more sophisticated and specialized techniques, such as dealing with obfuscation and script malware.

Every cybersecurity professional would benefit from getting a firm grasp of Chapter 4, "ML for Social Engineering". In fact, anyone at all should have an understanding of how ML can be used to trick unsuspecting users, as part of their cybersecurity education. This chapter really shows that you have to be cautious because machines are becoming better at imitating humans. On the other hand, ML also provides the tools to know when such an attack is being performed.

Chapter 5, "Penetration Testing Using ML", is a technical chapter and is most appropriate for cybersecurity professionals who are concerned with pen testing. It covers 10 ways in which pen testing can be improved by using ML, including neural network-assisted fuzzing and DeepExploit, a framework that utilizes a reinforcement learning (RL) agent on top of Metasploit to perform automatic pen testing.

Chapter 6, "Automatic Intrusion Detection", has a wider appeal, as a lot of cybersecurity professionals have to know how to defend a network from intruders. They would benefit from seeing how to leverage ML to stop zero-day attacks on their network. In addition, the chapter covers many other use cases, such as spam filtering, botnet detection, and insider threat detection, which are more useful to some than to others.

Chapter 7, "Securing and Attacking Data with ML", provides great content for cybersecurity professionals interested in utilizing ML to improve their password security and other forms of data security.

Chapter 8, "Secure and Private AI", is invaluable to data scientists in the field of cybersecurity. Recipes in this chapter include federated learning and differential privacy (which allow you to train an ML model on clients' data without compromising their privacy) and testing adversarial robustness (which allows you to improve the robustness of ML models to adversarial attacks).

Your book talks about using machine learning to generate custom malware to pentest security. Can you elaborate on how this works and why this matters?

As a general rule, you want to find out your vulnerabilities before someone else does (who might be up to no good). For that reason, pen testing has always been an important step in providing security. To pen test your antivirus well, it is important to use the latest techniques in malware evasion, as the bad guys will certainly try them, and these are deep learning-based techniques for modifying malware.

On Emmanuel's personal achievements in the cybersecurity domain
Dr. Tsukerman, in 2017, your anti-ransomware product was listed in the 'Top 10 ransomware products of 2018' by PC Magazine. In your experience, why are ransomware attacks on the rise and what makes an effective anti-ransomware product? Also, in 2018, you designed an ML-based, instant-verdict malware detection system for Palo Alto Networks' WildFire service of over 30,000 customers. Can you tell us more about this project?

If you monitor cybersecurity news, you will see that ransomware continues to be a huge threat. The reason is that ransomware offers cybercriminals an extremely attractive weapon. First, it is very difficult to trace the culprit from the malware or from the crypto wallet address. Second, the payoffs can be massive, be it from hitting the right target (e.g., a HIPAA-compliant healthcare organization) or a large number of targets (e.g., all traffic to an e-commerce web page). Third, ransomware is offered as a service, which effectively democratizes it!

On the flip side, a lot of the risk of ransomware can be mitigated through common-sense tactics. First, back up your data. Second, have an anti-ransomware solution that provides guarantees. A generic antivirus can provide no guarantee - it either catches the ransomware or it doesn't. If it doesn't, your data is toast. However, certain anti-ransomware solutions, such as the one I have developed, do offer guarantees (e.g., no more than 0.1% of your files lost). Finally, since millions of new ransomware samples are developed each year, the malware solution must include a machine learning component to catch the zero-day samples, which is another component of the anti-ransomware solution I developed.

The project at Palo Alto Networks is a similar implementation of ML for malware detection. The one difference is that unlike the anti-ransomware service, which is an endpoint security tool, it offers protection services from the cloud. Since Palo Alto Networks is a firewall-service provider, that makes a lot of sense: ideally, the malicious sample will be stopped at the firewall and never even reach the endpoint.

To learn how to implement the techniques discussed in this interview, grab your copy of the Machine Learning for Cybersecurity Cookbook. Don't wait - the bad guys aren't waiting.

Author Bio

Emmanuel Tsukerman graduated from Stanford University and obtained his Ph.D. from UC Berkeley. In 2017, Dr. Tsukerman's anti-ransomware product was listed in the Top 10 ransomware products of 2018 by PC Magazine. In 2018, he designed an ML-based, instant-verdict malware detection system for Palo Alto Networks' WildFire service of over 30,000 customers. In 2019, Dr. Tsukerman launched the first cybersecurity data science course.

About the Book

Machine Learning for Cybersecurity Cookbook will guide you through constructing classifiers and features for malware, which you'll train and test on real samples. You will also learn to build self-learning, reliant systems to handle cybersecurity tasks such as identifying malicious URLs, spam email detection, intrusion detection, network protection, and tracking user and process behavior, and much more!


Key skills for data professionals to learn in 2020

Richard Gall
20 Dec 2019
6 min read
It's easy to fall into the trap of thinking about your next job, or even the job after that. It's far more useful, however, to think more about the skills you want and need to learn now. This will focus your mind and ensure that you don't waste time learning things that simply aren't helpful. It also means you can make use of the things you're learning almost immediately. This will make you more productive and effective - and who knows, maybe it will make the pathway to your future that little bit clearer. So, to help you focus, here are some of the things you should focus on learning as a data professional.

Reinforcement learning

Reinforcement learning is one of the most exciting and cutting-edge areas of machine learning. Although the area itself is relatively broad, the concept is fundamentally about getting systems to 'learn' through a process of reward. Because reinforcement learning focuses on making the best possible decision at a given moment, it naturally finds many applications where decision making is important. This includes things like robotics, digital ad-bidding, configuring software systems, and even something as prosaic as traffic light control. Of course, the list of potential applications for reinforcement learning could be endless. To a certain extent, the real challenge with it is finding new use cases that are relevant to you. But to do that, you need to learn and master it - so make 2020 the year you do just that. Get to grips with reinforcement learning with Reinforcement Learning Algorithms with Python.

Learn neural networks

Neural networks are closely related to reinforcement learning - they're essentially another element within machine learning. However, neural networks are even more closely aligned with what we think of as typical artificial intelligence. Indeed, even the name itself hints at the fact that these systems are supposed to in some way mimic the human brain. Like reinforcement learning, there are a number of different applications for neural networks. These include image and language processing, as well as forecasting. The complexity of relationships that can be captured inside neural network systems is useful for handling data with many different variables and intricacies that would otherwise be difficult to capture. If you want to find out how artificial intelligence really works under the hood, make sure you learn neural networks in 2020. Learn how to build real-world neural network projects with Neural Network Projects with Python.

Meta-learning

Meta-learning is another area of machine learning. It's designed to help engineers and analysts use the right machine learning algorithms for specific problems - it's particularly important in automatic machine learning, where removing human agency from the analytical process can lead to the wrong systems being used on data. Meta-learning does this by being applied to metadata about machine learning projects. This metadata will include information about the data, such as algorithm features, performance measures, and patterns identified previously. Once meta-learning algorithms have 'learned' from this data, they should, in theory, be well optimized to run on other sets of data. It has been said that meta-learning is important in the move towards generalized artificial intelligence, or AGI (intelligence that is more akin to human intelligence).
This is because getting machines to learn about learning allows systems to move between different problems - something that is incredibly difficult with even the most sophisticated neural networks. Whether it will actually get us any closer to AGI is certainly open to debate, but if you want to be a part of the cutting edge of AI development, getting stuck into meta-learning is a good place to begin in 2020. Find out how meta-learning works in Hands-on Meta Learning with Python.

Learn a new programming language

Python is now the undisputed language of data. But that's far from the end of the story - R still remains relevant in the field, and there are even reasons to use other languages for machine learning. It might not be immediately obvious - especially if you're content to use R or Python for analytics and algorithmic projects - but because machine learning is shifting into many different fields, from mobile development to cybersecurity, learning how other programming languages can be used to build machine learning algorithms could be incredibly valuable. From the perspective of your skill set, it gives you a level of flexibility that will not only help you to solve a wider range of problems, but also stand out from the crowd when it comes to the job market. The most obvious non-obvious languages to learn for machine learning practitioners and other data professionals are Java and Julia. But even new and emerging languages are finding their way into machine learning - Go and Swift, for example, could be interesting routes to explore, particularly if you're thinking about machine learning in production software and systems. Find out how to use Go for machine learning with Go Machine Learning Projects.

Learn new frameworks

For data professionals there are probably few things more important than learning new frameworks. While it's useful to become a polyglot, it's nevertheless true that learning new frameworks and ecosystem tools is going to have a more immediate impact on your work. PyTorch and TensorFlow should almost certainly be on your list for 2020. But we've mentioned them a lot recently, so it's probably worth highlighting other frameworks worth your focus: Pandas for data wrangling and manipulation, Apache Kafka for stream processing, scikit-learn for machine learning, and Matplotlib for data visualization. The list could be much, much longer; however, the best way to approach learning a new framework is to start with your immediate problems. What's causing issues? What would you like to be able to do but can't? What would you like to be able to do faster? Explore TensorFlow eBooks and videos on the Packt store.

Learn how to develop and communicate a strategy

It's easy to just roll your eyes when someone talks about how important 'soft skills' are for data professionals. Except it's true - being able to strategize, communicate, and influence are what mark you out as a great data pro rather than a merely competent one. The phrase 'soft skills' is often what puts people off - ironically, despite the name, they're often even more difficult to master than technical skills. This is because, of course, soft skills involve working with humans in all their complexity. However, while learning these sorts of skills can be tough, it doesn't mean it's impossible. To a certain extent it largely just requires a level of self-awareness and reflexivity, as well as a sensitivity to wider business and organizational problems.
A good way of doing this is to step back and think of how problems are defined, and how they relate to other parts of the business. Find out how to deliver impactful data science projects with Managing Data Science. If you can master these skills, you’ll undoubtedly be in a great place to push your career forward as the year continues.


Why choose OpenCV over MATLAB for your next Computer Vision project

Vincy Davis
20 Dec 2019
6 min read
Scientific computing relies on executing computer algorithms coded in different programming languages. One such interdisciplinary scientific field is the study of Computer Vision, often abbreviated as CV. Computer Vision is used to develop techniques that can automate tasks like acquiring, processing, analyzing, and understanding digital images. It is also utilized for extracting high-dimensional data from the real world to produce symbolic information. In simple words, Computer Vision gives computers the ability to see, understand, and process images and videos like humans.

The vast advances in hardware, machine learning tools, and frameworks have resulted in the implementation of Computer Vision in various fields like IoT, manufacturing, healthcare, security, etc. Major tech firms like Amazon, Google, Microsoft, and Facebook are investing immensely in the research and development of this field. Out of the many tools and libraries available for Computer Vision nowadays, two major tools, OpenCV and Matlab, stand out in terms of their speed and efficiency. In this article, we will have a detailed look at both of them.

Further Reading

To learn how to build interesting image recognition models like setting up license plate recognition using OpenCV, read the book "Computer Vision Projects with OpenCV and Python 3" by author Matthew Rever. The book will also guide you to design and develop production-grade Computer Vision projects by tackling real-world problems.

OpenCV: An open-source multiplatform solution tailored for Computer Vision

OpenCV, developed by Intel and now supported by Willow Garage, is released under the BSD 3-Clause license and is free for commercial use. It is one of the most popular computer vision tools, aimed at providing a well-optimized, well-tested, and open-source (C++)-based implementation of computer vision algorithms. The open-source library has interfaces for multiple languages like C++, Python, and Java and supports Linux, macOS, Windows, iOS, and Android. Many of its functions are implemented on GPU. The first stable release, OpenCV version 1.0, was in the year 2006. The OpenCV community has grown rapidly ever since, and its latest release, OpenCV version 4.1.1, also brings improvements in the dnn (Deep Neural Networks) module, a popular module in the library that implements forward pass (inferencing) with deep networks pre-trained using popular deep learning frameworks.

Some of the features offered by OpenCV include:
- An imread function to read images in the BGR (Blue-Green-Red) format by default.
- Easy up- and downscaling for resizing an image, with support for various interpolation and downsampling methods such as INTER_NEAREST for nearest-neighbor interpolation.
- Multiple variations of thresholding like adaptive thresholding, bitwise operations, edge detection, image filtering, image contours, and more.
- Image segmentation (the Watershed algorithm) to classify each pixel in an image into a particular class of background or foreground.
- Multiple feature-matching algorithms, like brute-force matching and kNN feature matching, among others.

With its active community and regular updates for machine learning, OpenCV is only going to grow by leaps and bounds in the field of Computer Vision projects.
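As a quick illustration of a few of the functions listed above, the following snippet reads an image (the file path is a placeholder), resizes it with nearest-neighbor interpolation, and applies adaptive thresholding and Canny edge detection:

```python
import cv2

# Path is a placeholder; point it at any image on disk.
img = cv2.imread("input.jpg")          # loaded as BGR by default
if img is None:
    raise FileNotFoundError("input.jpg not found")

# Upscale 2x using nearest-neighbor interpolation.
resized = cv2.resize(img, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_NEAREST)

# Convert to grayscale, then apply adaptive thresholding and Canny edge detection.
gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)
edges = cv2.Canny(gray, 100, 200)

cv2.imwrite("thresholded.png", thresh)
cv2.imwrite("edges.png", edges)
```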
MATLAB: A licensed quick prototyping tool with OpenCV integration

One disadvantage of OpenCV, which makes novice computer vision users tilt towards Matlab, is the former's complex nature. OpenCV is comparatively harder to learn due to a lack of documentation and error-handling codes. Matlab, developed by MathWorks, is a proprietary programming language with a multi-paradigm numerical computing environment. It has over 3 million users worldwide and is considered one of the easiest and most productive software packages for engineers and scientists. It has a very powerful and swift matrix library.

Matlab also works in integration with OpenCV. This enables MATLAB users to explore, analyze, and debug designs that incorporate OpenCV algorithms. The support package of MATLAB includes the data type conversions necessary for MATLAB and OpenCV. The MathWorks-provided Computer Vision Toolbox renders algorithms, functions, and apps for designing and testing computer vision, 3D vision, and video processing systems. It also allows detection, tracking, feature extraction, and matching of objects. Matlab can also train custom object detectors using deep learning and machine learning algorithms such as YOLO v2, Faster R-CNN, and ACF. Most of the toolbox algorithms in Matlab support C/C++ code generation for integrating with existing code, desktop prototyping, and embedded vision system deployment.

However, Matlab does not contain as many functions for computer vision as OpenCV, which has more of its functions implemented on GPU. Another issue with Matlab is that it's not open source: its license is costly and the programs are not portable. Another important factor, which matters a lot in computer vision, is the performance of the code, especially when working on real-time video processing.

Which has a faster execution time, OpenCV or Matlab?

Along with Computer Vision, other fields also require faster execution when choosing a programming language or library for implementing any function. This factor is analyzed in detail in a paper titled "Matlab vs. OpenCV: A Comparative Study of Different Machine Learning Algorithms". The paper provides a very practical comparative study between Matlab and OpenCV using 20 different real datasets. The comparison is based on the execution time of various machine learning algorithms like Classification and Regression Trees (CART), Naive Bayes, Boosting, Random Forest, and K-Nearest Neighbor (KNN). The experiments were run on an Intel Core 2 Duo P7450 machine with 3GB RAM and a Ubuntu 11.04 32-bit operating system, on Matlab version 7.12.0.635 (R2011a) and OpenCV C++ version 2.1.

The paper states, "To compare the speed of Matlab and OpenCV for a particular machine learning algorithm, we run the algorithm 1000 times and take the average of the execution times. Averaging over 1000 experiments is more than necessary since convergence is reached after a few hundred." The outcome of all the experiments revealed that though Matlab is a successful scientific computing environment, it is outrun by OpenCV for almost all the experiments when their execution time is considered. The paper points out that this could be due to a combination of the number of dimensions, the sample size, and the use of training sets. One of the listed machine learning algorithms, KNN, produced a log time ratio of 0.8 and 0.9 on datasets D16 and D17, respectively.

Clearly, Matlab is great for exploring and fiddling with computer vision concepts for researchers and students at universities that can afford the software.
However, when it comes to building production-ready, real-world computer vision projects, OpenCV beats Matlab hands down. You can learn about building more Computer Vision projects, like human pose estimation using TensorFlow, from our book 'Computer Vision Projects with OpenCV and Python 3'.


Uber AI Labs senior research scientist, Ankit Jain on TensorFlow updates and learning machine learning by doing [Interview]

Sugandha Lahoti
19 Dec 2019
10 min read
No doubt, TensorFlow is one of the most popular machine learning libraries right now. However, newbie developers who want to experiment with TensorFlow often face difficulties in learning it, relying just on tutorials.

Recently, we sat down with Ankit Jain, senior research scientist at Uber AI Labs and one of the authors of the book TensorFlow Machine Learning Projects. Ankit talked about how real-world implementations can be a good way to learn for those developing TF models, specifically the 'learn by doing' approach. Talking about TensorFlow 2.0, he considers 'eager execution by default' a major paradigm shift and is all game for interoperability between TF 2.0 and other machine learning frameworks. He also gave us an insight into the limitations of AI algorithms (generalization, AI ethics, and labeled data, to name a few). Continue reading the full interview for a detailed perspective.

On why the TensorFlow 2 upgrade is paradigm-shifting in more ways than one

TensorFlow 2 was released last month. What are some of your top features in TensorFlow 2.0? How do you think it has upgraded the machine learning ecosystem?

TF 2.0 is a major upgrade from its predecessor in many ways. It addressed many of the shortcomings of TF 1.x, and with this release the difference between PyTorch and TF has narrowed. One of the biggest paradigm shifts in TF 2.0 is eager execution by default. This means you don't have to pre-define a static computation graph, create sessions, deal with an unintuitive interface, or have a painful experience debugging your deep learning model code. However, you lose some run-time performance when you switch to complete eager mode. For that purpose, they have introduced the tf.function decorator, which can help you translate your Python functions to TensorFlow graphs (a short tf.function sketch appears below). This way you can retain both code readability and ease of debugging while getting the performance of TensorFlow graphs.

Another major update is that many confusing redundancies have been consolidated and many functions are now integrated with the Keras API. This will help to standardize the communication of data and models among various components of the TensorFlow ecosystem. TF 2.0 also comes with backward compatibility to TF 1.x, with an easy optional way to convert your TF 1.x code into TF 2.0. TF 1.x suffered from a lack of standardization in how we load and save trained machine learning models. TF 2.0 fixed this by defining a single API, SavedModel. As SavedModel is integrated with the TensorFlow ecosystem, it becomes much easier to deploy models using TensorFlow Lite or TensorFlow.js to other devices and applications.

With the onset of TensorFlow 2, TensorFlow and Keras are integrated into one module (tf.keras). TF 2.0 now delivers Keras as the central high-level API used to build and train models. What is the future, and what are the benefits, of TensorFlow + Keras?

Keras has been a very popular high-level API for faster prototyping and production, and even for research. As the field of AI/ML is in its nascent stages, ease of development can have a huge impact for people getting started in machine learning. Previously, a developer new to machine learning started from Keras, while an experienced researcher used only TensorFlow 1.x due to its flexibility to build custom models. With Keras integrated as a high-level API for TF 2.0, we can expect both beginners and experts working on the same framework, which can lead to better collaboration and better exchange of ideas in the community. Additionally, a single high-level, easy-to-use API reduces confusion and streamlines consistency across use cases of production and research. Overall, I think it's a great step in the right direction by Google which will enable more developers to hop on the TensorFlow ecosystem.
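As a brief illustration of the tf.function pattern mentioned above, here is a minimal TF 2.x sketch; the toy model, data, and training loop are made up for the example:

```python
import tensorflow as tf

# Toy model and data, purely to illustrate eager execution vs. tf.function.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

x = tf.random.normal((256, 8))
y = tf.random.normal((256, 1))

@tf.function  # traces the Python function into a TensorFlow graph for speed
def train_step(features, labels):
    with tf.GradientTape() as tape:
        predictions = model(features, training=True)
        loss = loss_fn(labels, predictions)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(10):
    print(float(train_step(x, y)))  # runs as a compiled graph, but reads like eager code
```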
On TensorFlow, NLP, and structured learning

Recently, Transformers 2.0, a popular open-source NLP library, was released that provides deep interoperability between TF 2.0 and PyTorch. What are your views on this development?

One of the areas where deep learning has made an immense impact is Natural Language Processing (NLP). Research in NLP is moving very fast, and it is hard to keep up with all the papers and code releases by various research groups around the world. Hugging Face, the company behind the library Transformers, has really eased the usage of state-of-the-art (SOTA) models and the process of building new models by simplifying the preprocessing and model-building pipeline through an easy-to-use, Keras-like interface. Transformers 2.0 is the recent release from the company, and the most important feature is the interoperability between PyTorch and TF 2.0. TF 2.0 is more production-ready, while PyTorch is more oriented towards research. With this upgrade, you can pretty much move from one framework to another for training, validation, and deployment of the model.

Interoperability between frameworks is very important for the AI community as it enables development velocity. Moreover, as none of the frameworks can be perfect at everything, it makes the framework developers focus more on their strengths and make those features seamless. This will create greater efficiency going forward. Overall, I think this is a great development, and I expect other libraries in domains like computer vision, graph learning, etc. to follow suit. This will enable a lot more application of state-of-the-art models to production.

Google recently launched Neural Structured Learning (NSL), an open-source TensorFlow-based framework for training neural networks with graphs and structured data. What are some of the potential applications of NSL? What do you think could be some machine learning projects based around NSL?

Neural structured learning is a concept of learning neural network parameters with structured signals other than features. Many real-world datasets contain some structured information, like knowledge graphs or molecular graphs in biology. Incorporating these signals can lead to a more accurate and robust model. From an implementation perspective, it boils down to adding a regularizer to the loss function such that the representations of neighboring nodes in the graph are similar.

Any application where the amount of labeled data is limited but structural information like a knowledge graph can be exploited is a good candidate for these types of models. A possible example could be fraud detection in online systems. Fraud data generally has sparse labels, and fraudsters create multiple accounts that are connected to each other through some information like devices, etc. This structured information can be utilized to learn a better representation of fraud accounts. There can be other applications in molecular data and other problems involving knowledge graphs.

On Ankit's experience working on his book, TensorFlow Machine Learning Projects

Tell us the motivation behind writing your book TensorFlow Machine Learning Projects. Why is TensorFlow ideal for building ML projects?
What are some of your favorite machine learning projects from this book?

When I started learning TensorFlow, I stumbled upon many tutorials (including the official ones) which explained various concepts of how TensorFlow works. While that was helpful in understanding the basics, most of my learning came from building projects with TensorFlow. That is when I realized the need for a resource that teaches using a 'learn by doing' approach. This book is unique in the way that it teaches machine learning theory, TensorFlow utilities, and programming concepts all while developing projects that are fun to build and of practical use.

My favorite chapter from the book is "Generating Uncertainty in Traffic Signs Classifier using Bayesian Neural Networks". With the development of self-driving cars, traffic sign detection is a major problem that needs to be solved. This chapter explains an advanced AI concept, Bayesian neural networks, and shows step by step how to use them to detect traffic signs using TensorFlow. Some of the readers of the book have already started to use this concept in their practical applications.

Machine learning challenges and advice to those developing TensorFlow models

What are the biggest challenges today in the field of machine learning and AI? What do you see as the greatest technology disruptors in the next 5 years?

While AI and machine learning have seen huge success in recent years, there are a few limitations of AI algorithms as we see them today. Some of the major ones are:

Labeled data: Most of the success of AI has come from supervised learning. Many of the recent supervised deep learning algorithms require huge quantities of labeled data, which is expensive to obtain. For example, obtaining huge amounts of clinical trial data for healthcare prediction is very challenging. The good news is that there is some research around building good ML models using sparse data labels.

Explainability: Deep learning models are essentially a "black box" where you don't know what factor(s) led to a prediction. For some applications like money lending, disease diagnosis, and fraud detection, the explanations of predictions become very important. Currently, we see some nascent work in this direction with the LIME and SHAP libraries.

Generalization: In the current state of AI, we build one model for each application. We still don't have good generalization of models from one task to another. Generalization, if solved, can lead us to truly Artificial General Intelligence (AGI). Thankfully, approaches like transfer learning and meta-learning are trying to solve this challenge.

Bias, fairness, and ethics: The output of a machine learning model is heavily based on the input training data. Many a time, training data can have biases towards particular ethnicities, classes, religions, etc. We need more solutions in this direction to build trust in AI algorithms.

Overall, I feel AI is becoming mainstream, and in the next 5 years we will see many traditional industries adopt AI to solve critical business problems and achieve more automation. At the same time, tooling for AI will keep on improving, which will also help its adoption.

What is your advice for those developing machine learning projects on TensorFlow?

Building projects with new techniques and technologies is a hard process. It requires patience, dealing with failures, and hard work. For that reason, it is very important to pick a project that you are passionate about.
This way, you will continue building even if you are stuck somewhere. The selection of the right project is by far the most important criterion in the project-based learning method.

About the Author

Ankit currently works as a Senior Research Scientist at Uber AI Labs, the machine learning research arm of Uber. His work primarily involves the application of deep learning methods to a variety of Uber's problems, ranging from food recommendation systems and forecasting to self-driving cars.

Previously, he has worked in a variety of data science roles at Bank of America, Facebook, and other startups. Additionally, he has been a featured speaker at many of the top AI conferences and universities across the US, including UC Berkeley and the O'Reilly AI Conference. He completed his MS at UC Berkeley and a BS at IIT Bombay (India). You can find him on LinkedIn, Twitter, and GitHub.

About the Book

With the help of this book, TensorFlow Machine Learning Projects, you'll not only learn how to build advanced projects using different datasets but also be able to tackle common challenges using a range of libraries from the TensorFlow ecosystem. To start with, you'll get to grips with using TensorFlow for machine learning projects; you'll explore a wide range of projects using TensorForest and TensorBoard for detecting exoplanets, TensorFlow.js for sentiment analysis, and TensorFlow Lite for digit classification. As you make your way through the book, you'll build projects in various real-world domains. By the end of this book, you'll have gained the required expertise to build full-fledged machine learning projects at work.
article-image-data-science-and-machine-learning-what-to-learn-in-2020
Richard Gall
19 Dec 2019
5 min read
Save for later

Data science and machine learning: what to learn in 2020

It's hard to keep up with the pace of change in the data science and machine learning fields. And when you're under pressure to deliver projects, learning new skills and technologies might be the last thing on your mind. But if you don't have at least one eye on what you need to learn next, you run the risk of falling behind. In turn, this means you miss out on new solutions and new opportunities to drive change: you might miss the chance to do things differently. That's why we want to make it easy for you with this quick list of what you need to watch out for and learn in 2020.

The growing TensorFlow ecosystem

TensorFlow remains the most popular deep learning framework in the world. With TensorFlow 2.0, the Google-based development team behind it has attempted to rectify a number of issues and improve overall performance. Most notably, some of the problems around usability have been addressed, which should help the project's continued growth and perhaps even lower the barrier to entry. Relatedly, TensorFlow.js is proving that the wider TensorFlow ecosystem is incredibly healthy. It will be interesting to see what projects emerge in 2020 - it might even bring JavaScript web developers into the machine learning fold. Explore Packt's huge range of TensorFlow eBooks and videos on the store.

PyTorch

PyTorch hasn't quite managed to topple TensorFlow from its perch, but it's nevertheless growing quickly. Easier to use and more accessible than TensorFlow, if you want to start building deep learning systems quickly, your best bet is probably to get started on PyTorch. Search PyTorch eBooks and videos on the Packt store.

End-to-end data analysis on the cloud

When it comes to data analysis, one of the most pressing issues is to speed up pipelines. This is, of course, notoriously difficult - even in organizations that do their best to be agile and fast, it's not uncommon to find that their data is fragmented and diffuse, with little alignment across teams. One of the opportunities for changing this is the cloud. When used effectively, cloud platforms can dramatically speed up analytics pipelines and make it much easier for data scientists and analysts to deliver insights quickly. This might mean that we need increased collaboration between data professionals, engineers, and architects, but if we're to really deliver on the data at our disposal, then this shift could be massive. Learn how to perform analytics on the cloud with Cloud Analytics with Microsoft Azure.

Data science strategy and leadership

While cloud might help to smooth some of the friction that exists in our organizations when it comes to data analytics, there's no substitute for strong and clear leadership. The split between the engineering side of data and the more scientific or interpretive aspect has been noted, which means that there is going to be real demand for people who have a strong understanding of what data can do, what it shows, and what it means in terms of action. Indeed, there is also likely to be an increasing need for executive-level understanding. That means data scientists have the opportunity to take a more senior role inside their organizations, by either working closely with execs or even moving up to that level. Learn how to build and manage a data science team and initiative that delivers with Managing Data Science.
Going back to the algorithms

In the excitement about the opportunities of machine learning and artificial intelligence, it's possible that we've lost sight of some of the fundamentals: the algorithms. Indeed, given the conversation around algorithmic bias and unintended consequences, it certainly makes sense to place renewed attention on the algorithms that lie right at the center of our work. Even if you're not an experienced data analyst or data scientist - indeed, especially if you're a beginner - it's just as important to dive deep into algorithms. This will give you a robust foundation for everything else you do. And while statistics and mathematics will feel a long way from the supposed sexiness of data science, carefully considering what role they play will ensure that the models you build are accurate and perform as they should. Get stuck into algorithms with Data Science Algorithms in a Week.

Computer vision and natural language processing

Computer vision and natural language processing are two of the most exciting aspects of modern machine learning and artificial intelligence. Both can be used for analytics projects, but they also have applications in real-world digital products. Indeed, with augmented reality and conversational UI becoming more and more common, businesses need to be thinking very carefully about whether this could give them an edge in how they interact with customers. These sorts of innovations can be driven from many different departments - but technologists and data professionals should be seizing the opportunity to lead the way on how innovation can transform customer relationships. For more technology eBooks and videos to help you prepare for 2020, head to the Packt store.

article-image-artificial-intelligence-data-science-and-big-data-in-2019-what-really-mattered
Richard Gall
16 Dec 2019
6 min read
Save for later

Artificial intelligence, data science, and big data in 2019: what really mattered

The techlash hasn't died down - it's just become normalized. Barely a day passes without a new scandal emerging, from questionable surveillance to racist AI algorithms. But it hasn't all been bad: while negatives get a lot of attention (and so they should - the consequences of tech can be lethal, both societally and literally), there was still plenty to get excited about. And for those working in the data profession - as analysts, scientists, and engineers - there were several important trends that really helped to define where we are now from a purely practical perspective, as well as hinting at where we might go in the future. With just a few weeks left to go of the year (and the decade!), let's look at some of the key things that defined this year in the field of data science and data engineering.

The growth of PyTorch

TensorFlow is undoubtedly the most popular deep learning framework. You might even say that its role in popularizing deep learning and artificial intelligence has been understated. But while TensorFlow has held its place for some time, 2019 was the year when things started to change. Look, for example, at a Google Trends comparison of the two frameworks (and yes, I know it's not in any way scientific): TensorFlow hit its stride pretty early on, and it's only in the last 12 months or so that PyTorch has been narrowing the gap.

One of the reasons for this is the fact that PyTorch 1.0 was released at the end of last year. This has been the foundation that has spurred its growth over the last 12 months, effectively announcing its 'official' arrival on the scene, with Facebook (PyTorch's creator) building on this foundation throughout the year with a few small but important releases. PyTorch 1.3, for example, which was released at the PyTorch Developer Conference in October, included a number of 'experimental' new features, including named tensors and PyTorch Mobile.

Another reason for PyTorch's growth this year is that it is finding traction in the research field. There is hard data showing that PyTorch is starting to grow in this area, with the tool's comparable simplicity, API, and performance cited as the reasons it's undermining TensorFlow's utter dominance of the field. Find our PyTorch bundle, and other data bundles, here. Grab 5 titles for just $25.

TensorFlow 2.0

While PyTorch has grown significantly in 2019, TensorFlow is nevertheless still holding its place at the top of the deep learning rankings. And TensorFlow 2.0 has undoubtedly cemented its position. With the alpha release getting developers excited since March, the full launch of 2.0 marked an important milestone for the project. The key difference between TensorFlow 2.0 and 1.0 is ultimately accessibility and ease of use. Despite its massive popularity, TensorFlow 1.0 always had a reputation for being a little more difficult to use than many other deep learning tools. The team were clearly aware of this and have done a lot to make life easier for TensorFlow developers. "With tight integration of Keras into TensorFlow, eager execution by default, and Pythonic function execution," the team write in the release notes, "TensorFlow 2.0 makes the experience of developing applications as familiar as possible for Python developers." When placed alongside the exciting development of PyTorch, it's clear that these two tools are going to be defining deep learning in the year - or years - to come. Get up to date with what's new in TensorFlow 2.0 with TensorFlow 2.0 Quick Start Guide.
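The "eager execution by default" change the release notes mention is easy to see in practice. As a tiny, hedged illustration (not from the original article), TensorFlow 2.0 operations run immediately and return concrete values, with no session or graph-building step:

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [1.0]])
print(tf.matmul(a, b).numpy())  # runs immediately and prints [[3.] [7.]], much like NumPy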
Stream processing with Kafka, Flink, and others

Dealing with large quantities of data in real time is now the cutting edge of big data. It's for this reason that this year we've started to see stream processing gain headway in the mainstream. Although it's long been an important technique for organizations with data-intensive needs, the use of cloud and hybrid solutions - as well as an overall awareness of the opportunities of real-time data - has made it truly mainstream. In turn, this is giving new prominence to a range of stream-processing platforms. Kafka, Spark, and Flink are just three of the most well-known names in this space, but the market is undoubtedly growing. Another key driver here is Nvidia - as one of the leading hardware companies, it deserves a lot of credit for helping to make massive processing power accessible to organizations that wouldn't have had a chance just a few years ago. With CUDA, Nvidia's parallel programming paradigm for GPUs, the company is helping all sorts of users to leverage stream processing in different ways. Get started with Apache Kafka with Apache Kafka Quick Start Guide.

Data analysis on the cloud

Although I've already mentioned how influential TensorFlow was in popularizing deep learning, today public cloud is going even further. It's making artificial intelligence and analytics accessible to new roles (thinking here about tools like Azure Machine Learning Studio and Amazon SageMaker), as well as making it easier to build and deploy machine learning models in applications and products. In recent weeks, Microsoft has made another step in its bid to eat into AWS's market share with Azure Synapse. Essentially a next-generation Azure SQL Data Warehouse, Synapse is designed to bridge the gap between data lake and data warehouse - offering massive scale and improving analytical speed. It will be interesting to see how this plays with the wider market. AWS might respond with something similar - but the onus remains on Microsoft to shift mindshare; AWS will want to consolidate its powerful position.

Security

It would be wrong to suggest that security is a new issue in the world of data engineering and analytics. But in 2019 it became almost impossible to think about the two domains as separate from one another. This cuts two different ways: on the one hand, the emphasis on securing data and protecting privacy has never been greater. On the other hand, artificial intelligence and machine learning have started to play a critical part in the way that we monitor and identify threats to our systems. To a certain extent this expresses the double bind that data poses: the amount of data at our disposal is a nightmare from a governance and architectural perspective, but it is, at the same time, a way of mitigating that very nightmare. All in all, then, it's a bit of a vicious cycle, but nevertheless a reminder that however big our data gets, and however much we try to automate, there will always be a need for humans to think creatively and strategically about how we actually go about solving problems. Explore Packt's security bundles now. For more technology eBooks and videos to prepare you for 2020, head to the Packt store.

article-image-challenge-deep-learning-sustain-current-pace-innovation-ivan-vasilev-machine-learning-engineer
Sugandha Lahoti
13 Dec 2019
8 min read
Save for later

“The challenge in Deep Learning is to sustain the current pace of innovation”, explains Ivan Vasilev, machine learning engineer

If we talk about recent breakthroughs in the software community, machine learning and deep learning are major contenders - the usage, adoption, and experimentation of deep learning have increased exponentially. Especially in the areas of computer vision, speech, and natural language processing and understanding, deep learning has made unprecedented progress. GANs, variational autoencoders, and deep reinforcement learning are also creating impressive AI results.

To know more about the progress of deep learning, we interviewed Ivan Vasilev, a machine learning engineer and researcher based in Bulgaria. Ivan is also the author of the book Advanced Deep Learning with Python. In this book, he teaches advanced deep learning topics like attention mechanisms, meta-learning, graph neural networks, memory-augmented neural networks, and more using the Python ecosystem. In this interview, he shares his experiences working on this book, compares TensorFlow and PyTorch, and talks about computer vision, NLP, and GANs.

On why he chose computer vision and NLP as the two major focus areas of his book

Computer vision and natural language processing are two popular areas where a number of developments are ongoing. In his book, Advanced Deep Learning with Python, Ivan delves deep into these two broad application areas. "One of the reasons I emphasized computer vision and NLP", he clarifies, "is that these fields have a broad range of real-world commercial applications, which makes them interesting for a large number of people."

The other reason for focusing on computer vision, he says, "is because of the natural (or human-driven if you wish) progress of deep learning. One of the first modern breakthroughs was in 2012, when a solution based on convolutional network won the ImageNet competition of that year with a large margin compared to any previous algorithms. Thanks in part to this impressive result, the interest in the field was renewed and brought many other advances including solving complex tasks like object detection and new generative models like generative adversarial networks. In parallel, the NLP domain saw its own wave of innovation with things like word vector embeddings and the attention mechanism."

On the ongoing battle between TensorFlow and PyTorch

There are two popular machine learning frameworks that are currently on par - TensorFlow and PyTorch (both had new releases in the past month, TensorFlow 2.0 and PyTorch 1.3). There is an ongoing debate that pitches TensorFlow and PyTorch as rival technologies and communities. Ivan does not think there is a clear winner between the two libraries, and this is why he has included them both in the book. He explains, "On the one hand, it seems that the API of PyTorch is more streamlined and the library is more popular with the academic community. On the other hand, TensorFlow seems to have better cloud support and enterprise features. In any case, developers will only benefit from the competition. For example, PyTorch has demonstrated the importance of eager execution and TensorFlow 2.0 now has much better support for eager execution to the point that it is enabled by default. In the past, TensorFlow had internal competing APIs, whereas now Keras is promoted as its main high-level API.
On the other hand, PyTorch 1.3 has introduced experimental support for iOS and Android devices and quantization (computation operations with reduced precision for increased efficiency)."

Using machine learning in the stock trading process can make markets more efficient

Ivan discusses his venture into the field of financial machine learning, being the author of an ML-oriented, event-based algorithmic trading library. However, financial machine learning (and stock price prediction in particular) is usually not the focus of mainstream deep learning research. "One reason", Ivan states, "is that the field isn't as appealing as, say, computer vision or NLP. At first glance, it might even appear gimmicky to predict stock prices." He adds, "Another reason is that quality training data isn't freely available and can be quite expensive to obtain. Even if you have such data, pre-processing it in an ML-friendly way is not a straightforward process, because the noise-to-signal ratio is a lot higher compared to images or text. Additionally, the data itself could have huge volume."

"However", he counters, "using ML in finance could have benefits, besides the obvious (getting rich by trading stocks). The participation of ML algorithms in the stock trading process can make the markets more efficient. This efficiency will make it harder for market imbalances to stay unnoticed for long periods of time. Such imbalances will be corrected early, thus preventing painful market corrections, which could otherwise lead to economic recessions."

GANs can be used for nefarious purposes, but that doesn't warrant discarding them

Ivan has also given special emphasis to generative adversarial networks in his book. Although extremely useful, in recent times GANs have been used to generate high-dimensional fake data that look very convincing. Many researchers and developers have raised concerns about the negative repercussions of using GANs and wondered if it is even possible to prevent and counter their misuse or abuse. Ivan acknowledges that GANs may have unintended outcomes, but that shouldn't be the sole reason to discard them. He says, "Besides great entertainment value, GANs have some very useful applications and could help us better understand the inner workings of neural networks. But as you mentioned, they can be used for nefarious purposes as well. Still, we shouldn't discard GANs (or any algorithm with similar purpose) because of this. If only because the bad actors won't discard them. I think the solution to this problem lies beyond the realm of deep learning. We should strive to educate the public on the possible adverse effects of these algorithms, but also to their benefits. In this way we can raise the awareness of machine learning and spark an honest debate about its role in our society."

Machine learning can have both intentional and unintentional harmful effects

Awareness and ethics go in parallel. Ethics is one of the most important topics to emerge in machine learning and artificial intelligence over the last year. Ivan agrees that ethics and algorithmic bias in machine learning are of extreme importance. He says, "We can view the potential harmful effects of machine learning as either intentional or unintentional. For example, the bad actors I mentioned when we discussed GANs fall into the intentional category. We can limit their influence by striving to keep the cutting edge of ML research publicly available, thus denying them any unfair advantage of potentially better algorithms.
Fortunately, this is largely the case now and hopefully will remain that way in the future."

"I don't think algorithmic bias is necessarily intentional," he says. "Instead, I believe that it is the result of the underlying injustices in our society, which creep into ML through either skewed training datasets or unconscious bias of the researchers. Although the bias might not be intentional, we still have a responsibility to put a conscious effort to eliminate it."

Challenges in the machine learning ecosystem

"The field of ML exploded (in a good sense) a few years ago," says Ivan, "thanks to a combination of algorithmic and computer hardware advances. Since then, researchers have introduced new, smarter, and more elegant deep learning algorithms. But history has shown that AI can generate such great hype that even the impressive achievements of the last few years could fall short of the expectations of the general public."

"So, in a broader sense, the challenge in front of ML is to sustain the current pace of innovation. In particular, current deep learning algorithms fall short in some key intelligence areas, where humans excel. For example, neural networks have a hard time learning multiple unrelated tasks. They also tend to perform better when working with unstructured data (like images), compared to structured data (like graphs)."

"Another issue is that neural networks sometimes struggle to remember long-distance dependencies in sequential data. Solving these problems might require new fundamental breakthroughs, and it's hard to give an estimation of such one-time events. But even at the current level, ML can fundamentally change our society (hopefully for the better). For instance, in the next 5 to 10 years, we can see the widespread introduction of fully autonomous vehicles, which have the potential to transform our lives."

This is just a snapshot of some of the important focus areas in the deep learning ecosystem. You can check out more of Ivan's work in his book Advanced Deep Learning with Python. In this book you will investigate and train CNN models with GPU-accelerated libraries like TensorFlow and PyTorch. You will also apply deep neural networks to state-of-the-art domains like computer vision problems, NLP, GANs, and more.

Author Bio

Ivan Vasilev started working on the first open source Java deep learning library with GPU support in 2013. The library was acquired by a German company, where he continued its development. He has also worked as a machine learning engineer and researcher in the area of medical image classification and segmentation with deep neural networks. Since 2017 he has focused on financial machine learning. He is working on a Python-based platform which provides the infrastructure to rapidly experiment with different ML algorithms for algorithmic trading. You can find him on LinkedIn and GitHub.

Kaggle's Rachel Tatman on what to do when applying deep learning is overkill
Brad Miro talks TensorFlow 2.0 features and how Google is using it internally
François Chollet, creator of Keras on TensorFlow 2.0 and Keras integration, tricky design decisions in deep learning and more

article-image-master-the-art-of-face-swapping-with-opencv-and-python-by-sylwek-brzeczkowski-developer-at-truststamp
Vincy Davis
12 Dec 2019
8 min read
Save for later

Master the art of face swapping with OpenCV and Python by Sylwek Brzęczkowski, developer at TrustStamp

No discussion on image processing can be complete without talking about OpenCV. Its 2500+ algorithms, extensive documentation, and sample code are considered world-class for exploring real-time computer vision. OpenCV supports a wide variety of programming languages such as C++, Python, and Java, and is also available on different platforms including Windows, Linux, OS X, Android, and iOS. OpenCV-Python, the Python API for OpenCV, is one of the most popular libraries used to solve computer vision problems. It combines the best qualities of the OpenCV C++ API and the Python language. The OpenCV-Python library uses NumPy, which is a highly optimized library for numerical operations with a MATLAB-style syntax. This makes it easier to integrate the Python API with other libraries that use NumPy, such as SciPy and Matplotlib. This is the reason why it is used by many developers to execute different computer vision experiments.

Want to know more about OpenCV with Python?

If you are interested in developing your computer vision skills, you should definitely master the algorithms in OpenCV 4 and Python explained in our book 'Mastering OpenCV 4 with Python' written by Alberto Fernández Villán. This book will help you build complete projects in relation to image processing, motion detection, image segmentation, and many other tasks by exploring the deep learning Python libraries and also by learning the OpenCV deep learning capabilities.

At the PyData Warsaw 2018 conference, Sylwek Brzęczkowski walked through how to implement a face swap using OpenCV and Python. Face swaps are used by apps like Snapchat to dispense various face filters. Brzęczkowski is a Python developer at TrustStamp.

Steps to implement face swapping with OpenCV and Python

#1 Face detection using histogram of oriented gradients (HOG)

Histogram of oriented gradients (HOG) is a feature descriptor that is used to detect objects in computer vision and image processing. Brzęczkowski demonstrated how a HOG works using square patches which, when slid over an array of images, produce histogram of oriented gradients feature vectors. These feature vectors are then passed to the classifier, which returns the result with the highest matching samples.

In order to implement face detection using HOG in Python, the image first needs to be loaded with OpenCV. Next, a frontal face detector object is created for the loaded image with detector = dlib.get_frontal_face_detector(). The detector then produces a vector of rectangles with the detected faces.

#2 Facial landmark detection aka face alignment

Face landmark detection is the process of finding points of interest in an image of a human face. When dlib is used for facial landmark detection, it returns 68 unique facial landmarks for the whole face. After the first iteration of the algorithm, the value of T equals 0. This value increases linearly such that at the end of the iteration, T gets the value 10. The image evolved at this stage produces the 'ground truth', which means that the iteration can stop now. Due to this working, this stage of the process is also called face alignment. To implement this stage, Brzęczkowski showed how to add a predictor in the Python program loaded from shape_predictor_68_face_landmarks.dat, a model of around 100 megabytes. This process generally takes a long time, as we tend to pick the biggest, clearest image for detection.
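A minimal sketch of these two steps, assuming dlib, OpenCV, a local image file, and the shape_predictor_68_face_landmarks.dat model are available (the file names below are placeholders, and this is not the speaker's own code):

import cv2
import dlib

detector = dlib.get_frontal_face_detector()  # HOG-based frontal face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("face.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):  # rectangles around detected faces
    shape = predictor(gray, rect)  # 68 facial landmarks for this face
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    for (x, y) in points:
        cv2.circle(image, (x, y), 2, (0, 255, 0), -1)  # visualize the landmarks

cv2.imwrite("landmarks.jpg", image)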
#3 Finding face border using convex hull

The convex hull of a set of points is defined as the smallest convex polygon which encloses all of the points in the set. This means that for a given set of points, the convex hull is the subset of these points such that all the given points are inside the polygon formed by that subset. To find the face border in an image, we need to change the structure a bit. The structure is first passed to the convex hull function with return points set to false, which means that we get an output of indexes. Brzęczkowski then exhibited the face border drawn on the image in blue using the find_convex_hull.py script.

#4 Approximating nonlinear operations with linear operations

In linear filtering of an image, the value of an output pixel is a linear combination of the values of the input pixels. Brzęczkowski put forth the example of the affine transformation, which is a type of linear mapping method used to preserve points, straight lines, and planes. On the other hand, a nonlinear filter produces an output which is not a linear function of its input. He then demonstrated both kinds of transformation on his own image. Brzęczkowski advised users to check the website learnOpenCV.com to learn how to approximate a nonlinear operation with a linear one.

#5 Finding triangles in an image using Delaunay triangulation

A Delaunay triangulation subdivides a set of points in a plane into triangles such that the points become vertices of the triangles. In other words, this method subdivides the space or the surface into triangles in such a way that if you look at any triangle in the image, it will not have another point inside it. Brzęczkowski then demonstrated how the image developed in the previous stage contained face points from which "you can identify my teeth and then create a subdiv object, insert all these points that I created or detected." Next, he deploys Delaunay triangulation to produce a list of triangles, which is then used to draw the triangles on the image. Post this step, he uses the delaunay_triangulation.py script to generate these triangles on the images.

#6 Blending one face into another

To recap, we started from detecting a face using HOG and finding its border using the convex hull, followed by adding mouth points to indicate specific indexes. Next, Brzęczkowski begins blending the images using seamless cloning, which inserts one face into the other without visible seams and offers a choice of cloning modes to account for different skin colors. Brzęczkowski then explains that the underlying Poisson image editing technique uses the values of the gradients instead of the raw values of the pixels of the image. To implement the same method in OpenCV, he further demonstrates how information like the source image, the destination image, a mask, and the center (which is the location where the cloned part should be placed) is required to blend the two faces. Brzęczkowski then shows a string of illustrations transforming his image with the images of popular artists like Jamie Foxx, Clint Eastwood, and others.

#7 Stabilization using optical flow with the Lucas-Kanade method

In computer vision, the Lucas-Kanade method is a widely used differential method for optical flow estimation.
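Before moving on to stabilization, here is a rough sketch of how steps 3 to 6 map onto OpenCV calls. It assumes points holds the 68 landmark coordinates from the previous steps and that src (the face being copied) has already been warped to match dst (the target frame) and shares its size; the per-triangle warping itself is omitted, and none of this is the speaker's own code:

import cv2
import numpy as np

# Convex hull of the landmarks: returnPoints=False gives indexes into `points`
hull_index = cv2.convexHull(np.array(points), returnPoints=False)
hull = [points[int(i)] for i in hull_index.flatten()]

# Delaunay triangulation over the hull points
h, w = dst.shape[:2]
subdiv = cv2.Subdiv2D((0, 0, w, h))
for p in hull:
    subdiv.insert((float(p[0]), float(p[1])))
triangles = subdiv.getTriangleList()  # each row holds the six coordinates of one triangle

# Seamless cloning: blend the (already warped) source face into the destination
mask = np.zeros(dst.shape[:2], dtype=np.uint8)
cv2.fillConvexPoly(mask, np.int32(hull), 255)
x, y, bw, bh = cv2.boundingRect(np.int32(hull))
center = (x + bw // 2, y + bh // 2)
output = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)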
The Lucas-Kanade method assumes that the flow is essentially constant in a local neighborhood of the pixel under consideration, and solves the basic optical flow equations for all the pixels in that neighborhood by the least-squares criterion. Thus, by combining information from several nearby pixels, the Lucas-Kanade method resolves the inherent ambiguity of the optical flow equation. This method is also less sensitive to noise in an image.

Using this method to stabilize the face-swapped image means assuming, in plain terms, that the optical flow is essentially constant in a local neighborhood of the pixel under consideration. As Brzęczkowski puts it, "if we have a red point in the center we assume that all the points around, let's say in this example is three on three pixels, we assume that all of them have the same optical flow and thanks to that assumption we have nine equations and only two unknowns." This makes the computation fairly easy to solve. Using this assumption, the optical flow works smoothly if we have the previous grayscale frame of the image. This means that for face swapping images using OpenCV, a user needs to have the previous points of the image along with the current points of the image. By combining all this information, the actual point becomes a combination of the detected landmark and the predicted landmark. Thus, by implementing the Lucas-Kanade method for stabilization, Brzęczkowski produces a non-shaky version of his face-swapped image.

Watch Brzęczkowski's full video to see a step-by-step implementation of a face-swapping task. You can learn advanced applications like facial recognition, target tracking, or augmented reality from our book, 'Mastering OpenCV 4 with Python' written by Alberto Fernández Villán. This book will also help you understand the application of artificial intelligence and deep learning techniques using popular Python libraries like TensorFlow and Keras.

Getting to know PyMC3, a probabilistic programming framework for Bayesian Analysis in Python
How to perform exception handling in Python with 'try, catch and finally'
Implementing color and shape-based object detection and tracking with OpenCV and CUDA [Tutorial]
OpenCV 4.0 releases with experimental Vulcan, G-API module and QR-code detector among others
article-image-getting-to-know-pymc3-a-probabilistic-programming-framework-for-bayesian-analysis-in-python
Vincy Davis
11 Dec 2019
5 min read
Save for later

Getting to know PyMC3, a probabilistic programming framework for Bayesian Analysis in Python

Bayes' theorem, named after the 18th-century British mathematician Thomas Bayes, is a mathematical formula for determining conditional probability. The theorem is used to revise or update existing predictions or theories using new or additional evidence. Bayes' theorem is also used in the field of data science as it provides a rule for moving from a prior probability to a posterior probability. In Bayesian statistics, a prior probability is the probability of an event before new data is collected, and a posterior probability is the conditional probability that is assigned after the relevant evidence is acquired. Hence, Bayesian methods are among the most popular machine learning techniques in the field of data science.

In this post, we are going to discuss a specific Bayesian approach called probabilistic programming (PP) in Python, considering that modern Bayesian statistics is mainly done by writing code. Probabilistic programming enables flexible specification of complex Bayesian statistical models, thus giving users the ability to focus more on model design, evaluation, and interpretation, and less on mathematical or computational details.

Further Reading

To know more about Bayesian data analysis techniques using PyMC3 and ArviZ, read our book 'Bayesian Analysis with Python', written by Osvaldo Martin. This book will help you acquire skills for a practical and computational approach towards Bayesian statistical modeling. The book also lists the best practices in Bayesian analysis with the help of sample problems and practice exercises.

A group of researchers have published a paper, "Probabilistic Programming in Python using PyMC", presenting a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. PyMC3 is a popular open-source PP framework in Python with an intuitive and powerful syntax close to the natural syntax statisticians use. The PyMC3 installation depends on several third-party Python packages which are automatically installed when installing via pip. It requires four dependencies: Theano, NumPy, SciPy, and Matplotlib. To take full advantage of PyMC3, the researchers suggest, the optional dependencies Pandas and Patsy should also be installed using: pip install patsy pandas.

How to use PyMC3 in probabilistic programming?

In the paper, the researchers have utilized a simple Bayesian linear regression model with normal priors for the parameters. The unknown variables in the model are assigned prior distributions, artificial data is simulated using NumPy's random module, and the PyMC3 model is then used to retrieve the corresponding parameters. The straightforward PyMC3 model structure is close to the statistical notation used to describe the model.

Firstly, the necessary components are imported from PyMC3 to build the required model. The paper presents the model in full first and then explains it in parts: "Following instantiation of the model, the subsequent specification of the model components is performed inside a with statement: with basic_model: This creates a context manager, with our basic model as the context, that includes all statements until the indented block ends." This means that all the PyMC3 objects introduced in the indented code block below the with statement are added to the model behind the scenes.
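A minimal sketch of this idiom, loosely following the linear regression example in the paper (PyMC3-era API, with simulated data), might look like this:

import numpy as np
import pymc3 as pm

# Simulate artificial data with NumPy's random module
np.random.seed(123)
X = np.random.randn(100)
Y = 1.0 + 2.5 * X + np.random.randn(100) * 0.5  # true intercept 1.0, slope 2.5, noise 0.5

with pm.Model() as basic_model:
    # Priors for the unknown parameters
    alpha = pm.Normal("alpha", mu=0, sd=10)
    beta = pm.Normal("beta", mu=0, sd=10)
    sigma = pm.HalfNormal("sigma", sd=1)

    # Expected value of the outcome and the likelihood of the observations
    mu = alpha + beta * X
    Y_obs = pm.Normal("Y_obs", mu=mu, sd=sigma, observed=Y)

    # Posterior estimates: MAP via optimization, or samples via MCMC
    map_estimate = pm.find_MAP()
    trace = pm.sample(1000)

pm.traceplot(trace)       # smoothed histograms and sequential sample traces
print(pm.summary(trace))  # text-based output of common posterior statistics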
In the absence of this context manager idiom, users would be forced to manually associate each of the variables with the basic model immediately after creating them. Also, if a user tries to create a new random variable without a with model: statement, it will cause an error due to the absence of an obvious model for the variable to be added to.

Next, posterior estimates for the unknown variables in the model have to be obtained. The researchers explain two approaches, and users can choose either of them depending on the structure of the model and the goals of the analysis. The first approach is finding the maximum a posteriori (MAP) point using optimization methods, and the second approach is computing summaries based on samples drawn from the posterior distribution using Markov chain Monte Carlo (MCMC) sampling methods.

For producing a posterior analysis of the required model, PyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot. In the traceplot, the left column consists of the smoothed histograms while the right column contains the samples of the Markov chain plotted in sequential order. In addition, the summary function of PyMC3 provides a text-based output of common posterior statistics.

You can also learn more about the practical implementation of PyMC3 and its loss functions in the book 'Bayesian Analysis with Python' by Packt Publishing.

How Facebook data scientists use Bayesian optimization for tuning their online systems
How to perform exception handling in Python with 'try, catch and finally'
Fake Python libraries removed from PyPi when caught stealing SSH and GPG keys, reports ZDNet
Netflix open-sources Metaflow, its Python framework for building and managing data science projects
ActiveState adds thousands of curated Python packages to its platform

article-image-kaggles-rachel-tatman-on-what-to-do-when-applying-deep-learning-is-overkill
Vincy Davis
11 Dec 2019
8 min read
Save for later

Kaggle's Rachel Tatman on what to do when applying deep learning is overkill 

Deep learning, an emerging branch of machine learning, has garnered a lot of recognition in the field of technology over the last decade. It is regarded as a game-changer in AI, with distinct progress in computer vision, natural language processing (NLP), speech, and other areas of machine learning. This year an Indeed survey found 'deep learning engineer' to be the best tech job in the USA. Though deep learning has many benefits and a very appealing track record, not everybody can afford it. It has some downsides like large data requirements, high cost, and long compute times.

Below is a breakdown of Rachael Tatman's talk "Put down the deep learning: When not to use neural networks and what to do instead" at the PyCon 2019 conference, which delved into the problems with deep learning. Tatman is a data science advocate at Kaggle.

Deep learning models require a very large amount of data in order to perform better than other techniques. Also, according to Tatman, just the compute for a simple image generation model in deep learning can cost around $60,000, and this cost increases with the complexity of the model. Deep learning additionally requires expensive GPUs and hundreds of machines, which further deepens the cost to the user. Many less experienced practitioners also find it difficult to adopt deep learning, as there is no standard theory available for choosing among deep learning tools: the choice of a tool depends on the user's knowledge of topology, training methods, and other parameters. Finally, deep learning also takes a lot of time for training large models.

As the talk progresses, Tatman presents three different types of models that can be used instead of deep learning: regression-based models, tree-based models, and distance-based models.

The three proposed models instead of deep learning

The most interpretable: Regression-based models

The biggest advantage of a regression-based model is that it has a "well-principled" understanding of problems and offers many kinds of regression models, unlike deep learning. Users can simply work through a flowchart and decide on the best type of regression model for their data.

Some other advantages of regression models include their "fast to fit" nature. This means that they are much faster to fit when compared to a neural network, especially "if you're working with a well-optimized library; the Python regression libraries tend to vary wildly, so you might want to do a little bit of shopping around". They also work well with small data: Tatman affirmed that she has worked with as few as eight dozen data points. She added that since regression models are easy to interpret, she was able to learn many useful and interesting things from the data.

A few drawbacks of regression models are that a bit more data preparation is needed than for some other methods. They also require validation, as regression models are based on strong assumptions about the distribution of the data points or the distribution of the errors.

Tatman also proclaimed that if she were to use a single machine learning model for the rest of her life, it would be a mixed-effects regression model. Mixed-effects models are extensions of linear regression models for data that are collected and summarized in groups. They are mainly used to determine the expected or mean values for the subject population.
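As a quick, hedged illustration of what fitting such a model looks like in Python (the column names here are hypothetical, and this is not code from the talk), statsmodels provides a formula interface for mixed-effects regression:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")  # hypothetical grouped dataset

# 'score' is the outcome, 'hours' a fixed effect, 'subject' the grouping variable
model = smf.mixedlm("score ~ hours", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())  # fixed effects, random-effect variance, and fit statistics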
She believes, "you need to do a little bit more hands-on stuff, you need to do your validation, you probably need to do some additional data cleaning," but it only takes some time to do a lot of computing with less money and data.

Want to know more about regression?

With so many benefits in regression-based models, you should definitely give regression models a try. Read our book 'Python Machine Learning By Example' written by Yuxi (Hayden) Liu, to learn about regression algorithms and their evaluation. You can also master the art of building your own machine learning systems using other models such as Support Vector Machines and text analysis algorithms with this example-based practical guide.

The user-friendliest: Tree-based models

The next model family with the ability to replace deep learning models is tree-based models, which work like a decision tree. The tree checks a feature at each node and, depending on the value of that feature, decides the path to be followed. When going down a particular path, it again checks nodes for a feature. In this way, it works recursively to cut a decision region down into smaller chunks. Tatman also noted that developers generally opt for a random forest instead of a single decision tree. A random forest is an ensemble model that combines many different decision trees into a single model.

Per Tatman, "If you're in the machine learning community you might actually associate random forests with Kaggle and from 2010 to 2016, about two-thirds of all Kaggle competition winners used random forests." On the other hand, "less than half use some form of deep learning; also random forests continue to do very well today."

In the case of classification of data, random forests deliver better performance than logistic regression. They also do not need a lot of data cleaning or model validation. Random forests also do not require a user to convert the categorical variables; they simply take the values as given and provide a corresponding output. The tree-based family is also served by many easy-to-use packages like XGBoost, LightGBM, CatBoost, and others. In short, tree-based models are the most user-friendly, especially when doing classification.

The drawbacks of trees and random forests are that they can easily overfit and are more sensitive to differences between datasets. They are also less interpretable and require more compute and training time when compared to regression models. Thus, tree-based models require little money but do need some data and time to train on big datasets.

The most lightweight: Distance-based models

The last family, which according to Tatman can replace deep learning models, is a common label grouping together a large set of methods like k-nearest neighbors, Gaussian mixture models, and support vector machines. These models work with the basic idea that "points closer together to each other in a particular feature space are more likely to be in the same group." The k-nearest neighbors model decides the value of a point based on the majority of its nearest neighbors. Gaussian mixture models treat the data as a mixture of different Gaussian distributions. The support vector machine tries to place its boundary as far away from all the data points as possible. Distance-based models, particularly support vector models, work very well with small datasets. They also tend to train 10 times faster than a regression model on the same data.
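To make the comparison concrete, here is a rough scikit-learn sketch (not from the talk) that tries a random forest, k-nearest neighbors, and a support vector machine on a generic tabular dataset:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [
    ("random forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("k-nearest neighbors", KNeighborsClassifier(n_neighbors=5)),
    ("support vector machine", SVC(kernel="rbf", gamma="scale")),
]:
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")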
In terms of accuracy, distance-based models lag behind the other models, but for quick and dirty modeling they perform well. They are good at data classification but are a little slower when compared to regression-based models. Consequently, distance-based models take very little time, require very little money, and are extremely lightweight.

To conclude, Tatman says that the choice of model should depend on the time and money the individual or organization possesses, and that the most vital criterion for choosing a model is its performance. Tatman adds, "based on empirical evidence right now it looks like deep learning will perform the best on a given data set given sufficient time, money and compute." Watch Tatman's full talk for a detailed comparison of the three models.

You can learn more about all the above machine learning models from our book, 'Python Machine Learning By Example' written by Yuxi (Hayden) Liu. The book will help you in implementing machine learning classification and regression algorithms from scratch in Python. Also, learn how to optimize the performance of a machine learning model for your application from our book.

François Chollet, creator of Keras on TensorFlow 2.0 and Keras integration, tricky design decisions in Deep Learning, and more
Baidu adds Paddle Lite 2.0, new development kits, EasyDL Pro, and other upgrades to its PaddlePaddle deep learning platform
Why use JVM (Java Virtual Machine) for deep learning
Prof. Rowel Atienza discusses the intuition behind deep learning, advances in GANs & techniques to create cutting-edge AI models
Why Intel is betting on BFLOAT16 to be a game changer for deep learning training? Hint: Range trumps Precision.

article-image-teaching-gans-a-few-tricks-a-bird-is-a-bird-is-a-bird-robots-holding-on-to-things-and-bots-imitating-human-behavior
Savia Lobo
11 Dec 2019
7 min read
Save for later

Teaching GANs a few tricks: a bird is a bird is a bird, robots holding on to things and bots imitating human behavior

Generative adversarial networks (GANs) have been at the forefront of research on generative models in the last couple of years. GANs have been used for image generation, image processing, image synthesis from captions, image editing, visual domain adaptation, data generation for visual recognition, and many other applications, often leading to state-of-the-art results. One of the tutorials, titled 'Generative Adversarial Networks' and conducted at CVPR 2018 (the Conference on Computer Vision and Pattern Recognition, held at Salt Lake City, USA), provides a broad overview of generative adversarial networks and how GANs can be trained for different purposes. The tutorial involved various speakers sharing basic concepts and best practices for current state-of-the-art GANs, including network architectures, objective functions, training tricks, and much more. Let us look at how GANs are trained for different use cases.

There's more to GANs...

If you want to further explore different examples of modern GAN implementations, including CycleGAN, simGAN, DCGAN, and 2D image to 3D model generation, you can explore the book Generative Adversarial Networks Cookbook written by Josh Kalin. The recipes given in this cookbook will help you build on a common architecture in Python, TensorFlow and Keras to explore increasingly difficult GAN architectures in an easy-to-read format.

Training GANs for object detection using adversarial learning

Xiaolong Wang, from Carnegie Mellon University, talked about object detection in computer vision as well as in the context of robots taking actions. He also explained how to use adversarial learning for purposes beyond image generation. The key idea is to find adversarial tasks for your target task and to improve on the target by fighting against these adversarial tasks.

In computer vision, if your target task is to recognize a bird using object detection, one adversarial task is adding occlusions by generating a mask to occlude the bird's head and legs, which makes it difficult for the detector to recognize the bird. The detector will then try to conquer these difficult cases and, from then on, become robust to occlusions. Another adversarial task for object detection can be deformations: here the image can be slightly rotated to make detection difficult.

For training robots to grasp objects, one of the adversaries would be a shaking test: if the robot arm is stable enough, the object it grasps should not fall even with a rigorous shake. Another example is snatching: if another arm can snatch the object easily, the robot is not completely trained to resist snatching or stealing.

Wang said the CMU research team tried generating images using DCGAN on the COCO dataset. However, the generated images could not assist in training the detector, as the detectors could easily identify them as fake images. Next, the team generated images using conditional GANs on COCO, but these didn't help either. Hence, the team instead generated hard positive examples by adding real-world occlusions or real-world deformations to challenge the detectors.

He then talked about a standard Fast R-CNN detector, which takes an image as input to a convolutional neural network. After taking the input, the detector extracts features for the whole image, and you can later crop the features according to the proposal bounding box. These cropped features are resized to C*6*6, where 6*6 are the spatial dimensions.
These features are the object features you want to focus on, and you can also use them to perform classification or regression for detection. The team added a small network in the middle that takes the extracted features as input and generates a mask. The mask indicates which spatial locations to drop from the features, making it hard for the detector to recognize the object. He also shared benchmark results using different networks like AlexNet, VGG16, FRCN, and so on; the ASTN and ASDN models showed improved results over the other networks.

Understanding Generative Adversarial Imitation Learning (GAIL) for training a machine to imitate human behaviors

Stefano Ermon from Stanford University explained how to use generative modeling ideas and GAN training to imitate human behaviors in complex environments. A lot of progress in reinforcement learning has been made, with successes in playing board games such as chess, video games, and so on. However, reinforcement learning has one limitation: if you want to use it to solve a new task, you have to specify a cost or reward signal to provide some supervision to your reinforcement learning algorithm. You also need to specify which kinds of behaviors are desirable and which are not.

In a game scenario, the cost signal is whether you win or lose. However, in more complex tasks like driving an autonomous vehicle, specifying a cost signal becomes difficult, as there are different objectives like staying on the road, not moving above the speed limit, avoiding a crash, and much more.

The simplest method one can use is behavioral cloning, where you use trajectories and demonstrations to construct a training set of states with the corresponding action that the expert took in those states. You can then use your favorite supervised learning method - classification, or regression if the actions are continuous. However, this has some limitations: small errors may compound over time, as the learning algorithm will make certain mistakes initially and these mistakes will lead towards never-seen-before states or objects. It is also a black-box approach, with no explicit planning behind each decision. Ermon suggests that an alternative to imitation could be an inverse RL (IRL) approach, and he also demonstrates the similarities between RL and IRL. For the complete demonstration, you can check out the video.

The main difference between GAIL and GANs is that in GANs the generator takes random noise as input and maps it through a neural network to produce samples for the discriminator. In GAIL, however, the generator is more complex as it includes two components: a policy, which you can train, and an environment (a black-box simulator) that can't be controlled. What matters is the distribution over states and actions that you encounter when you navigate the environment using the policy that can be tuned. As the environment is difficult to control, training the GAIL model is harder than training a simple GAN: as in a GAN, the policy is trained in the direction of fooling the discriminator, but here part of the system is out of your hands.

However, GAIL is the easier generative modeling task, because you don't have to learn the whole thing end to end, and neither do you have to come up with a large neural network that maps noise into behaviors, as some part of the input is given by the environment. But it is harder to train because you don't really know how the black box works.
Ermon further explained how, using generative adversarial imitation learning, one can not only imitate complex behaviors but also learn interpretable and meaningful representations of complex behavioral data, including visual demonstrations, with InfoGAIL, a method built on top of GAIL.

He also presented a new framework for multi-agent imitation learning in general Markov games, which integrates multi-agent RL with a suitable extension of multi-agent inverse RL. The method generalizes GAIL beyond the single-agent case and can successfully imitate complex behaviors in high-dimensional environments with multiple cooperating or competing agents. To see the full demonstrations of GAIL, InfoGAIL, and multi-agent GAIL, watch the complete video on YouTube.

Knowing the basics is not enough; putting them into practice is necessary. If you want to use GANs practically and experiment with them, Generative Adversarial Networks Cookbook by Josh Kalin is your go-to guide. With this cookbook, you will work through use cases involving DCGAN, Pix2Pix, and more, applying them to real-world datasets.

Prof. Rowel Atienza discusses the intuition behind deep learning, advances in GANs & techniques to create cutting-edge AI models
Now there is a Deepfake that can animate your face with just your voice and a picture using temporal GANs
Now there's a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?
article-image-questions-tensorflow-2-0-tf-prebuilt-binaries-tensorboard-keras-python-support
Sugandha Lahoti
10 Dec 2019
5 min read
Save for later

#AskTensorFlow: Twitterati ask questions on TensorFlow 2.0 - TF prebuilt binaries, Tensorboard, Keras, and Python support

TensorFlow 2.0 was released recently with tighter integration with Keras, eager execution enabled by default, three times faster training performance, a cleaned-up API, and more. TensorFlow 2.0 went through a major API cleanup: many API symbols were removed or renamed for better consistency and clarity. It enables eager execution by default, which effectively means that your TensorFlow code runs like NumPy code. Keras has been introduced as the main high-level API, so developers can easily leverage its various model-building APIs. TensorFlow 2.0 also has the SavedModel API, which lets you save your trained machine learning model in a language-neutral format.

In May, Paige Bailey, Product Manager (TensorFlow), and Laurence Moroney, Developer Advocate at Google, sat down to discuss frequently asked questions on TensorFlow 2.0. They talked about TensorFlow prebuilt binaries, the TF 2.0 upgrade script, TensorFlow Datasets, and Python support.

Can I ask about any prebuilt binary for the RTX 2080 GPU on Ubuntu 16?
Prebuilt binaries for TensorFlow tend to be tied to a specific Nvidia driver version. If you are using a prebuilt binary, check which driver version it supports for that specific card. It is easy to go to the driver vendor and download the latest version, but that may not be the one TensorFlow was built for or supports, so make sure the two actually match.

Do my TensorFlow scripts work with TensorFlow 2.0?
Generally, TensorFlow 1.x scripts do not work unchanged with TensorFlow 2.0, but TensorFlow 2.0 provides an upgrade utility that is automatically downloaded with it. For more information, you can check out the Medium blog post that Paige and her colleague Anna created. It shows how you can run the upgrade script on any arbitrary Python file or even Jupyter Notebooks, and it produces an export.txt file that lists all of the symbol renames, the added keywords, and the manual changes that are still required.

When will TensorFlow support Python 3.7 and hence be accessible in Anaconda 3?
TensorFlow has committed that, as of January 1, 2020, it will no longer support Python 2. The team is firmly committed to Python 3 and Python 3 support.

Is it possible to run TensorBoard on Colab?
You can run TensorBoard on Colab and perform operations such as smoothing, changing values, and using the embedding visualizer directly from your Colab notebook to understand accuracy and debug model performance. You also do not have to specify ports, which means you do not need to keep track of multiple TensorBoard instances; TensorBoard automatically selects a suitable one.

How would you use TensorFlow's feature_columns with Keras?
TensorFlow's feature_columns API is quite useful for non-numerical feature processing. Feature columns are a way of getting your data efficiently into Estimators, and you can use them in Keras as well. TensorFlow 2.0 also has a migration guide if you want to move your models from Estimators to a more TensorFlow 2.0 style of working with Keras.

What are some simple datasets for testing and comparing different training methods for artificial neural networks? Are there any in TensorFlow 2.0?
Although MNIST and Fashion-MNIST are great, TensorFlow 2.0 also has TensorFlow Datasets, which provides a collection of datasets ready to use with TensorFlow. It handles downloading and preparing the data and constructing a tf.data pipeline. TensorFlow Datasets is compatible with both TensorFlow eager mode and graph mode, and you can use the datasets with all of your deep learning and machine learning models with just a few lines of code.
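As a quick illustration of the TensorFlow Datasets workflow mentioned above, the sketch below loads a dataset and builds a tf.data pipeline; the dataset name, batch size, and shuffle buffer are arbitrary choices for the example.

```python
# Minimal TensorFlow Datasets sketch; dataset and pipeline settings are illustrative.
import tensorflow as tf
import tensorflow_datasets as tfds

# Downloads and prepares the data on first use, then exposes it as a tf.data pipeline.
ds = tfds.load("fashion_mnist", split="train", as_supervised=True)
ds = ds.shuffle(1024).batch(32).prefetch(tf.data.experimental.AUTOTUNE)

for images, labels in ds.take(1):
    print(images.shape, labels.shape)  # e.g. (32, 28, 28, 1) (32,)
```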
What about all the web developers who are new to AI? How does TensorFlow 2.0 help them get started?
With TensorFlow 2.0, the models you create and export with SavedModel can be deployed to TensorFlow Lite or TensorFlow.js. The Keras layers are also supported in TensorFlow.js, so it is not just for Python developers but also for JavaScript developers and even R developers.

You can watch Paige and Laurence answering more questions in this three-part video series available on YouTube. Some of the other questions asked were:
Is there any TensorFlow.js transfer learning example for object detection?
Are you going to publish an updated version of the TensorFlow for Poets tutorial by Pete Warden, implementing TF 2.0, TFLite 2.0, and NN-API for faster inference on Android devices equipped with an NPU/DSP?
Will a frozen graph generated from TF 1.x work on TF 2.0?
Which is the preferred format for saving the model going forward: SavedModel (SM) or HDF5?
What is the purpose of keeping Estimators and Keras as separate APIs?

If you want to quickly get started building machine learning projects with TensorFlow 2.0, read our book TensorFlow 2.0 Quick Start Guide by Tony Holdroyd. In this book, you will get acquainted with the new practices introduced in TensorFlow 2.0 and learn to train your own models for effective prediction using the high-level Keras API.

TensorFlow.js contributor Kai Sasaki on how TensorFlow.js eases web-based machine learning application development
Introducing Spleeter, a TensorFlow-based Python library that extracts voice and sound from any music track
TensorFlow 2.0 released with tighter Keras integration, eager execution enabled by default, and more!
Brad Miro talks TensorFlow 2.0 features and how Google is using it internally

article-image-brad-miro-talks-tensorflow-2-0-features-and-how-google-is-using-it-internally
Sugandha Lahoti
10 Dec 2019
6 min read
Save for later

Brad Miro talks TensorFlow 2.0 features and how Google is using it internally

TensorFlow 2.0, released in October, has developers excited about a myriad of features and its ease of use. At the EuroPython Conference 2019, Brad Miro, developer programs engineer at Google, talked about the updates made in TensorFlow 2.0. He also gave an overview of how Google uses TensorFlow, why Python is important for TensorFlow development, and how to migrate from TF 1.x to TF 2.0. EuroPython is one of the most popular Python programming language community conferences. Below are some highlights from Brad's talk at EuroPython.

What is TensorFlow?
TensorFlow is an open-source deep learning library developed at Google and first released in 2015. It is a Python framework that includes a number of utilities to help you write deep neural networks, with support for both GPUs and TPUs. Deep learning involves a lot of mathematics, statistics, and algebra, as well as low-level system optimizations; TensorFlow abstracts much of that away, leaving you to focus on actually writing your model.

How TensorFlow is used internally at Google
TensorFlow is used internally at Google to power all of its machine learning and AI. Google's data centers are run with the help of AI and TensorFlow, which optimize their usage to reduce bandwidth, keep network connections efficient, and cut power consumption. TensorFlow is also used for global localization in Google Maps and is used heavily in the Google Pixel range of smartphones to optimize the software. These technologies are also applied in medical research, specifically in computer vision: for example, TensorFlow is used to distinguish the retinal image of a healthy eye from the retinal image of an eye with diabetic retinopathy.

Further Learning
If you want to learn to build more computer vision applications with TensorFlow 2.0, check out the book Hands-On Computer Vision with TensorFlow 2 by Benjamin Planche and Eliot Andres. This book from Packt Publishing is a practical guide to building high-performance systems for object detection, segmentation, video processing, smartphone applications, and more. By the end of the book, you will have both the theoretical understanding and the practical skills to solve advanced computer vision problems with TensorFlow 2.0.

Furthermore, Google uses AI and TensorFlow to predict whether objects in space are planets. In short, AI is used to predict whether fluctuations in the brightness of an object are due to it being a planet.

Why Python is so important for TensorFlow
Python has always been the choice for TensorFlow because the language is extremely easy to use and has a rich ecosystem for data science, including tools such as NumPy, scikit-learn, and pandas. When TensorFlow was being built, the idea was that it should have the simplicity of NumPy, the performance of C, and the ease of use of Python.

What does TensorFlow 2.0 bring to the table?
TensorFlow 2.0 is powerful, flexible, scalable, and easily deployable.

What's gone
Session.run
tf.control_dependencies
tf.global_variables_initializer
tf.cond, tf.while_loop
tf.contrib

What's new
Eager execution enabled by default
tf.function
Keras as the main high-level API
Distribution Strategy API
SavedModel API

TensorFlow 2.0 went through a major API cleanup: many API symbols were removed or renamed for better consistency and clarity. Session.run has been replaced by eager execution, which effectively means that your TensorFlow code runs like NumPy code.
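To see what "runs like NumPy code" means in practice, here is a tiny illustrative snippet; the values are arbitrary.

```python
# Eager execution in TensorFlow 2.0: operations run immediately, no graph or Session.run.
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x) + 1.0   # executes right away, like a NumPy expression
print(y.numpy())            # [[ 8. 11.] [16. 23.]]
```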
Eager execution enables fast iteration and intuitive debugging without building a graph. It also makes creating and experimenting with models in TensorFlow easier, and it can be especially useful when using the tf.keras model subclassing API. TensorFlow 2.0 has tf.function, a Python decorator that lets you write regular Python code that is later compiled down to TensorFlow graph code using AutoGraph. The Distribution Strategy API in TensorFlow 2.0 allows machine learning researchers to distribute training across a wide variety of compute configurations; this release also allows distributed training with Keras' model.fit and with custom training loops.

Keras is introduced as the main high-level API. Keras is a popular high-level API for easy and fast prototyping, building, and training of deep learning models, which lets developers easily leverage its various model-building APIs. Using Keras with TensorFlow offers two main methods:
Symbolic (Keras sequential): your model is a graph of layers; any graph you compile will run; TensorFlow helps you debug by catching errors at compile time.
Imperative (Keras subclassing): your model is Python bytecode; you get complete flexibility and control, but it is harder to debug and harder to maintain.
There are pros and cons to each method; it really just depends on your specific use case.

The SavedModel API allows you to save your trained ML model in a language-neutral format. With TensorFlow 2.0, all TensorFlow ecosystem projects, including TensorFlow Lite, TensorFlow.js, TensorFlow Serving, and TensorFlow Hub, support SavedModels. On TensorFlow Hub, you can store and download pre-built models. TensorFlow Extended (TFX) is a Python library that can be run on your servers to productionize your models. TensorFlow Lite lets you run your TensorFlow models on edge devices. With TensorFlow.js, you can run machine learning models in the browser using JavaScript or on servers using Node.js. TensorFlow also has Swift for TensorFlow to help developers use Swift to develop machine learning models. "Swift for TensorFlow provides a new programming model that combines the performance of graphs with the flexibility and expressivity of Eager execution, with a strong focus on improved usability at every level of the stack. This is not just a TensorFlow API wrapper written in Swift — we added compiler and language enhancements to Swift to provide a first-class user experience for machine learning developers." Other packages in the TensorFlow ecosystem that serve more niche use cases include TF Probability, TF Agents (reinforcement learning), Tensor2Tensor, TF Ranking, TF Text (natural language processing), TF Federated, TF Privacy, and more.

How to upgrade from TensorFlow 1.x to TensorFlow 2.0
There are several migration guides available on TensorFlow's website. You can also use the tf.compat.v1 library for backwards compatibility, and the tf_upgrade_v2 script, which you can run on any Python script to convert TF 1.x code to 2.0 code.
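As a rough sketch of the compatibility route, legacy TF 1.x-style code can keep running on a TensorFlow 2.x installation through the tf.compat.v1 module; the tiny placeholder graph below is a made-up example, not code from the talk.

```python
# Running TF 1.x-style code on a TF 2.x install via the compatibility module.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

# A tiny placeholder graph, written in the old TF 1.x style (illustrative only).
a = tf.placeholder(tf.float32, shape=())
b = a * 2.0

with tf.Session() as sess:
    print(sess.run(b, feed_dict={a: 3.0}))  # 6.0
```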
You can also read more about TF 2.0 migration in our book Hands-On Computer Vision with TensorFlow 2, which introduces the automatic migration tool, compares TensorFlow 1 concepts with their TensorFlow 2 counterparts, and offers a detailed guide on migrating to idiomatic TensorFlow 2 code. You can watch Brad's full talk on YouTube; the video is licensed under the CC BY-NC-SA 3.0 license.

TensorFlow.js contributor Kai Sasaki on how TensorFlow.js eases web-based machine learning application development
Introducing Spleeter, a TensorFlow-based Python library that extracts voice and sound from any music track
TensorFlow 2.0 released with tighter Keras integration, eager execution enabled by default, and more!