
Author Posts

121 Articles

Mastering Machine Learning: Best Practices and the Future of Generative AI for Software Engineers

Miroslaw Staron
25 Oct 2024
10 min read
Introduction

The field of machine learning (ML) and generative AI has rapidly evolved from its foundational concepts, such as Alan Turing's pioneering work on intelligence, to the sophisticated models and applications we see today. While Turing's ideas centered on defining and detecting intelligence, modern applications stretch the definition and utility of intelligence in the realm of artificial neural networks, language models, and generative adversarial networks. For software engineers, this evolution presents both opportunities and challenges, from creating intelligent models to harnessing tools that streamline development and deployment processes. This article explores best practices in machine learning, insights on deploying generative AI in real-world applications, and the emerging tools that software engineers can use to maximize efficiency and innovation.

Exploring Machine Learning and Generative AI: From Turing's Legacy to Today's Best Practices

When Alan Turing developed his famous Turing test for intelligence, computers and software were completely different from what we are used to now. I'm certain that Turing did not think about Large Language Models (LLMs), Generative AI (GenAI), Generative Adversarial Networks, or Diffusers. Yet this test for intelligence is as useful today as it was when it was developed. Perhaps our understanding of intelligence has evolved since then. We consider intelligence on different levels, for example at the philosophical level and at the computer science level. At the philosophical level, we still try to understand what intelligence really is, how to measure it, and how to replicate it. At the computer science level, we develop new algorithms that can tackle increasingly complex problems, utilize increasingly complex datasets, and provide more complex output.

In the following figure, we can see two different solutions to the same problem. On the left-hand side, the solution to the Fibonacci problem uses good old-fashioned programming, where the programmer translates the solution into a program. On the right-hand side, we see a machine learning solution: the programmer provides example data and uses an algorithm to find the pattern, then replicates it later.

Figure 1. The Fibonacci problem solved with a traditional algorithm (left-hand side) and machine learning's linear regression (right-hand side).

Although the traditional way is slow, it can be mathematically proven to be correct for all numbers, whereas the machine learning algorithm is fast, but we do not know whether it renders correct results for all numbers. Although this is a simple example, it illustrates that the difference between a model and an algorithm is not that great. Essentially, the machine learning model on the right is a complex function that takes an input and produces an output. The same is true for generative AI models.
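To make the contrast in Figure 1 concrete, here is a minimal Python sketch (my own illustration, not the author's code) that solves Fibonacci both ways: once with a plain iterative algorithm, and once with a linear regression that learns the recurrence from example pairs. It assumes NumPy and scikit-learn are installed.

import numpy as np
from sklearn.linear_model import LinearRegression

def fib_iterative(n: int) -> int:
    # Traditional programming: the recurrence is written out explicitly.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Machine learning: show the model examples of (fib(n-2), fib(n-1)) -> fib(n)
# and let linear regression discover the pattern (coefficients close to 1 and 1).
seq = [fib_iterative(i) for i in range(20)]
X = np.array([[seq[i - 2], seq[i - 1]] for i in range(2, len(seq))])
y = np.array([seq[i] for i in range(2, len(seq))])
model = LinearRegression().fit(X, y)

# Predict the next number from the last two known values.
prediction = model.predict(np.array([[seq[-2], seq[-1]]]))[0]
print(f"Iterative fib(20) = {fib_iterative(20)}, regression prediction = {prediction:.1f}")

The point is the same one the figure makes: the iterative version is provably correct for every n, while the learned model is only as trustworthy as the data and pattern it was fitted on.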
Generative AI

Generative AI is much more complex than the algorithms used for Fibonacci, but it works in the same way: based on the data, it creates new output. Instead of predicting the next Fibonacci number, LLMs predict the next token, and diffusers predict the values of new pixels. Whether that is intelligence, I am not qualified to judge. What I am qualified to say is how to use these kinds of models in modern software engineering.

When I wrote the book Machine Learning Infrastructure and Best Practices for Software Engineers [1], we could already see how powerful ChatGPT 3.5 was. In my profession, software engineers use it to write programs, debug them, and even improve the performance of those programs. I call it being a superprogrammer. When software engineers get these tools, they suddenly become team leaders for their bots, which support them – these bots are the copilots for the software engineers. But using these tools and models is just the beginning.

Harnessing NPUs and Mini LLMs for Efficient AI Deployment

Neural Processing Units (NPUs) have started to become more popular in modern computers, which addresses the challenge of running language models locally, without access to the internet. Local execution reduces latency and reduces the security risk of information being hijacked while it travels between the model and the client. However, NPUs are significantly less powerful than data centers, and therefore we can only use them with small language models – so-called mini-LLMs. An example of such a model is the Phi-3-mini model developed by Microsoft [2]. In addition to NPUs, frameworks like ONNX have appeared, which make it possible to quickly interchange models between GPUs and NPUs – you can train a model on a powerful GPU and use it on a small NPU thanks to these frameworks.

Since AI takes up so much space in modern hardware and software, Geekbench AI [3] is a benchmark suite that allows us to quantify and compare the AI capabilities of modern hardware. I strongly recommend taking it for a spin to check what we can do with the hardware that we have at hand. Now, hardware is only as good as the software, and there we have also seen a lot of important improvements.

Ollama and LLM frameworks

In my book, I presented methods and tools to work with generative AI (as well as classical ML). It's a solid foundation for designing, developing, testing, and deploying AI systems. However, if we want to utilize LLMs without the hassle of setting up the entire environment, we can use frameworks like Ollama [4]. The Ollama framework seamlessly downloads and deploys LLMs on a local machine if we have enough resources. Once the framework is installed, we can type ollama run phi-3 to start a conversation with the model. The framework provides a set of user interfaces, web services, and other mechanisms needed to construct fully fledged machine learning software [5]. We can use it locally for all kinds of tasks, for example in finance [6].
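Once a model is running locally, you can also call it programmatically. Below is a hedged sketch (my own example, not the author's code) that talks to Ollama's default local REST endpoint with Python's requests library; the model name and prompt are illustrative and assume the corresponding model has already been pulled.

import requests

# Ollama's local server listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "phi3",  # illustrative: use whatever model name you pulled locally
    "prompt": "Explain what a unit test is in two sentences.",
    "stream": False,  # ask for the full response as a single JSON object
}

response = requests.post(OLLAMA_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["response"])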
What's Next: Embracing the Future of AI in Software Engineering

As generative AI continues to evolve, its role in software engineering is set to expand in exciting ways. Here are key trends and opportunities that software engineers should focus on to stay ahead of the curve:

Mastering AI-Driven Automation: AI will increasingly take over repetitive programming and testing tasks, allowing engineers to focus on more creative and complex problems. Engineers should leverage AI tools like GitHub Copilot and Ollama to automate mundane tasks such as bug fixing, code refactoring, and even performance optimization. Actionable step: start integrating AI-driven tools into your development workflow. Experiment with automating unit tests, continuous integration pipelines, or even deployment processes using AI.

AI-Enhanced Collaboration: Collaboration with AI systems, or "AI copilots," will be a crucial skill. The future of software engineering will involve not just individual developers using AI tools but entire teams working alongside AI agents that facilitate communication, project management, and code integration. Actionable step: learn to delegate tasks to AI copilots and explore collaborative platforms that integrate AI to streamline team efforts. Tools like Microsoft Teams and GitHub Copilot integrated with AI assistants are a good start.

On-device AI and Edge Computing: The rise of NPUs and mini-LLMs signals a shift towards on-device AI processing. This opens opportunities for real-time AI applications in areas with limited connectivity or stringent privacy requirements. Software engineers should explore how to optimize and deploy AI models on edge devices. Actionable step: experiment with deploying AI models on edge devices using frameworks like ONNX and test how well they perform on NPUs or embedded systems.

To stay competitive and relevant, software engineers need to continuously adapt by learning new AI technologies, refining their workflows with AI assistance, and staying attuned to emerging ethical challenges. Whether by mastering AI automation, optimizing edge deployments, or championing ethical practices, the future belongs to those who embrace AI as both a powerful tool and a collaborative partner. For software engineers ready to dive deeper into the transformative world of machine learning and generative AI, Machine Learning Infrastructure and Best Practices for Software Engineers offers a comprehensive guide packed with practical insights, best practices, and hands-on techniques.

Conclusion

As generative AI technologies continue to advance, software engineers are at the forefront of a new era of intelligent and automated development. By understanding and implementing best practices, engineers can leverage these tools to streamline workflows, enhance collaborative capabilities, and push the boundaries of what is possible in software development. Emerging hardware solutions like NPUs, edge computing capabilities, and advanced frameworks are opening new pathways for deploying efficient AI solutions. To remain competitive and innovative, software engineers must adapt to these evolving technologies, integrating AI-driven automation and collaboration into their practices and embracing the future with curiosity and responsibility. This journey not only enhances technical skills but also invites engineers to become leaders in shaping the responsible and creative applications of AI in software engineering.

Author Bio

Miroslaw Staron is a professor of Applied IT at the University of Gothenburg in Sweden with a focus on empirical software engineering, measurement, and machine learning. He is currently editor-in-chief of Information and Software Technology and co-editor of the regular Practitioner's Digest column of IEEE Software. He has authored books on automotive software architectures, software measurement, and action research. He also leads several projects in AI for software engineering and leads an AI and digitalization theme at Software Center. He has written over 200 journal and conference articles.


Solving Scalability Challenges in Modern System Design: From Web Apps to GenAI

Tejas Chopra, Dhirendra Sinha
23 Oct 2024
10 min read
Introduction

In today's digital landscape, scalability isn't just a buzzword—it's a crucial determinant of success. As the complexity and user base of applications grow, so do the challenges in designing systems that can efficiently handle massive loads. This ongoing challenge of scalability was a key inspiration for my recent book, "System Design Guide for Software Professionals: Build scalable solutions – from fundamental concepts to cracking top tech company interviews".

The Scalability Crisis

Consider a scenario where a startup's web application goes viral, resulting in a massive influx of users. This should be a cause for celebration, but instead it becomes a nightmare as the application starts to slow down significantly. According to a 2024 report by Ably, nearly 85% of companies that experience sudden user growth face significant performance issues due to scalability challenges. The root cause often lies in early design decisions, where the rush to market overshadows the need to build for scale.

The Building Blocks Approach

Over the years, I've found that the "building blocks" approach to system design is crucial for building scalable systems. This method leverages established patterns and components to improve scalability. Here are some of the key building blocks discussed in my book:

Distributed Caching: A report from Ahex shows that implementing distributed caching systems like Redis or Memcached can reduce database load by up to 60%, significantly speeding up read operations (a minimal cache-aside sketch appears after the example metrics below).
Load Balancing: Modern load balancers are more than just traffic directors; they are intelligent systems that optimize resource utilization. A 2024 NGINX report revealed that effective load balancing can improve server efficiency by 40%, enhancing performance during peak loads.
Database Sharding: As data grows, a single database becomes a bottleneck. Sharding allows horizontal scaling, and companies that implemented it have seen up to a 5x increase in database throughput, as noted in a Google Cloud study.
Message Queues: Asynchronous processing with message queues like Kafka or RabbitMQ can decouple system components and manage traffic spikes. A Gartner report found that this can lead to a 30% reduction in latency during peak usage times.
Content Delivery Networks (CDNs): For global applications, CDNs are essential. According to Cloudflare, CDNs can reduce load times by 50-70% for users across different regions, significantly improving user experience.

Real-World Application: Scaling a Hypothetical E-commerce Platform

Consider an e-commerce platform initially designed as a monolithic application with a single database. This setup worked well for the first 100,000 users, but performance issues began to surface as the user base grew to a million.

Approach:

Microservices Architecture: Decomposing the monolith into microservices allows independent scaling of each component. Amazon famously adopted this approach, enabling it to handle billions of requests daily.
Distributed Caching: Implementing a distributed cache reduced database queries by 70%, as seen in an Akamai case study.
Database Sharding: Sharding the database improved query performance by 80%, according to data from MongoDB.
Message Queues: Using message queues for resource-intensive tasks led to a 25% reduction in system load, as per RabbitMQ's benchmarks.
CDN Deployment: Deploying a global CDN reduced page load times from 3.5 seconds to under 1 second, similar to the optimizations reported by Shopify.

Example Metrics:

Before optimization: The average page load time was 3.5 seconds, with 30% of requests exceeding 5 seconds during peak hours.
After optimization: Reduced to 800ms, with 99% of requests completing under 2 seconds, even during Black Friday.
Database query volume: Reduced by 65% through effective caching strategies.
Infrastructure costs: Reduced by 40% while handling 5x more daily active users.
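To make the distributed-caching building block concrete, here is a minimal cache-aside sketch in Python (my own illustration, not from the book). It assumes a local Redis instance and the redis-py client; the query function, key format, and TTL are placeholder assumptions.

import json
import redis  # assumes the redis-py client is installed

cache = redis.Redis(host="localhost", port=6379, db=0)
CACHE_TTL_SECONDS = 300  # illustrative time-to-live

def query_database(product_id: str) -> dict:
    # Placeholder for the real database call.
    return {"id": product_id, "name": "example product", "price_cents": 1999}

def fetch_product(product_id: str) -> dict:
    # Cache-aside read: try Redis first, fall back to the database on a miss.
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no database round trip
    product = query_database(product_id)  # cache miss: hit the primary store
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(product))
    return product

print(fetch_product("sku-42"))

This is the pattern behind the "reduced database queries by 70%" style of result quoted above: repeated reads are served from memory, and only misses reach the database.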
The AI/ML Twist: Scaling GenAI Infrastructure

Scaling infrastructure for Generative AI (GenAI) presents unique challenges. For instance, consider a startup offering a GenAI service for content creation. Initially, 10 high-end GPUs served 1,000 daily users, processing about 1 million tokens daily. However, rapid growth led to the processing of 500 million tokens per day for 100,000 users.

Challenges:

GPU Scaling: GPU scaling requires managing expensive, specialized hardware. A BCG report notes that effective GPU utilization can save companies up to 50% in infrastructure costs.
Token Economy: The varying token loads in GenAI apps pose significant challenges. According to Stanford University, token loads can vary dramatically, complicating resource prediction.
Cost Management: Cloud GPU instances can cost over $10,000/month. AWS reports that optimized GPU management strategies can reduce costs by 30%.
Latency Expectations: Users expect near-instant responses. A study by OpenAI found that sub-second latencies are critical for real-time applications.

Solutions (a request-batching sketch follows the example metrics below):

Dynamic GPU Allocation: Implementing dynamic GPU allocation can reduce idle times and costs, as observed by Google Cloud.
Request Batching: Grouping user requests can improve GPU throughput by 20%, according to Azure AI.
Model Optimization: Techniques like quantization and pruning can reduce model size by 70% and increase inference speed by 50%, as highlighted in MIT's research.
Tiered Service Levels: Offering different response time guarantees can optimize resource allocation, as shown by Microsoft Azure.
Distributed Inference: Splitting models across GPUs or using CPU inference can reduce GPU load by 40%, based on Google AI's findings.

Example Metrics:

Cost per 1,000 tokens: Reduced from $0.05 to $0.015 through optimized GPU management.
p99 latency: Improved from 5 seconds to 1.2 seconds.
Infrastructure scaling: Handled 1 billion daily tokens with only a 20x increase in costs, compared to the 100x increase projected by traditional scaling methods.
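As a hedged illustration of the request-batching idea (my own sketch, not the book's code), the snippet below groups incoming prompts into small batches before handing them to a model, which is the usual way to raise GPU utilization. The batch size, wait window, and run_model function are illustrative placeholders.

import queue
import threading
import time

MAX_BATCH_SIZE = 8       # illustrative: tune to the model and GPU memory
MAX_WAIT_SECONDS = 0.05  # how long to wait for more requests before flushing

request_queue: queue.Queue = queue.Queue()

def run_model(prompts):
    # Placeholder for a real batched inference call on the GPU.
    return [f"response to: {p}" for p in prompts]

def batching_loop():
    # Collect requests for a short window, then run them as one batch.
    while True:
        prompt, reply_box = request_queue.get()  # block until the first request
        batch = [(prompt, reply_box)]
        deadline = time.monotonic() + MAX_WAIT_SECONDS
        while len(batch) < MAX_BATCH_SIZE:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(request_queue.get(timeout=remaining))
            except queue.Empty:
                break
        outputs = run_model([p for p, _ in batch])
        for (_, box), output in zip(batch, outputs):
            box.put(output)  # hand each caller its own result

def submit(prompt: str) -> str:
    reply_box: queue.Queue = queue.Queue(maxsize=1)
    request_queue.put((prompt, reply_box))
    return reply_box.get()  # wait for the batched result

threading.Thread(target=batching_loop, daemon=True).start()
print(submit("Summarise request batching in one line."))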
Beyond Technology: The Human Factor

While technology is critical, fostering a culture of scalability is equally important. A Harvard Business Review article emphasized that companies prioritizing a scalable culture from the start are 50% more likely to sustain growth without operational roadblocks.

Strategies:

Encourage developers to consider scalability from the outset.
Invest in monitoring and observability tools to detect issues early.
Regularly conduct load tests and capacity planning.
Adopt a DevOps culture to break down silos between development and operations.

The Road Ahead

As we move forward, innovations in edge computing, serverless architectures, and large-scale machine learning will continue to push the boundaries of scalability. However, the foundational principles of scalable system design—modularity, redundancy, and efficient resource utilization—remain vital. By mastering these principles, you can build systems that grow and adapt to an ever-changing digital landscape, whether you're scaling a web application or pioneering generative AI technologies. Remember, scalability is not a destination but a journey, and having the right building blocks makes all the difference.


Creating Custom Tools in Unity: Automating Repetitive Tasks

Mohamed Essam
17 Oct 2024
5 min read
Introduction

In the fast-paced world of game development, efficiency is key. Developers often find themselves repeating mundane tasks that can consume valuable time and lead to human error. Imagine if you could automate these repetitive tasks, freeing up your time to focus on more creative aspects of your game. In this article, we'll explore how creating custom tools in Unity can transform your workflow, boost productivity, and reduce the risk of mistakes.

Why Custom Tools Matter

Custom tools are tailored solutions that address specific needs in your development process. Here's why they are crucial:

Efficiency: Automate routine tasks to save time and reduce manual effort.
Consistency: Ensure that repetitive tasks are executed uniformly across your project.
Error Reduction: Minimize human errors by automating processes that are prone to mistakes.
Focus on Creativity: Spend more time on innovative aspects of your game rather than getting bogged down with repetitive tasks.

Creating Your First Custom Tool

Overview: We'll walk through the process of creating a simple Unity editor tool to automate tasks like aligning game objects or batch renaming assets.

1. Setting Up Your Editor Window
Define the Purpose: Clearly outline what your tool will accomplish.
Create a New Editor Window: Use Unity's editor scripting API to create a custom window.
Add Basic UI Elements: Incorporate buttons, sliders, or input fields to interact with the tool.

2. Implementing Core Functionality
Aligning Game Objects: Write scripts to align selected game objects in the scene.
Batch Renaming Assets: Create a script that renames multiple assets based on a naming convention.

3. Tips for Effective Custom Tools
Start Simple: Begin with a basic tool and gradually add complexity as needed.
Prioritize Usability: Ensure your tool is intuitive and easy to use, even for developers who may not be familiar with the script.
Document Your Code: Include comments and documentation to make future updates easier.

How Custom Tools Solve Common Issues

Custom tools address several common development challenges:

Repetitive Tasks: Automate repetitive processes like object alignment or asset management to streamline your workflow.
Consistency Issues: Ensure that tasks are performed uniformly across your project, avoiding discrepancies and errors.
Time Management: Free up time for more complex and creative aspects of your game development by automating mundane tasks.

Let's Dive into the Hands-On Section

We've all encountered broken game objects in our scenes, and manually searching through every object to find missing script references can be tedious and time-consuming. One of the key advantages of editor scripts is the ability to create a tool that automatically scans all game objects and pinpoints exactly where the issues are.

Script Dependency Checker

To use this script, simply place it in the Editor folder within your Assets directory; the script needs to be in the Editor directory to function properly. Here's the code that creates a new menu item, which you'll find in the Editor menu bar. The script invokes the CheckDependencies method, which scans all game objects in the scene, checks for any missing components, and collects them in a list. The results are then displayed through the editor window using the OnGUI function.
using UnityEditor;
using UnityEngine;

public class ScriptDependencyChecker : EditorWindow
{
    private static Vector2 scrollPosition;
    private static string[] missingScripts = new string[0];

    [MenuItem("Tools/Script Dependency Checker")]
    public static void ShowWindow()
    {
        GetWindow<ScriptDependencyChecker>("Script Dependency Checker");
    }

    private void OnGUI()
    {
        if (GUILayout.Button("Check Script Dependencies"))
        {
            CheckDependencies();
        }

        if (missingScripts.Length > 0)
        {
            EditorGUILayout.LabelField("Objects with Missing Scripts:", EditorStyles.boldLabel);
            scrollPosition = EditorGUILayout.BeginScrollView(scrollPosition, GUILayout.Height(300));
            foreach (var entry in missingScripts)
            {
                EditorGUILayout.LabelField(entry);
            }
            EditorGUILayout.EndScrollView();
        }
        else
        {
            EditorGUILayout.LabelField("No missing scripts found.");
        }
    }

    private void CheckDependencies()
    {
        var missingList = new System.Collections.Generic.List<string>();
        GameObject[] allObjects = GameObject.FindObjectsOfType<GameObject>();

        foreach (var obj in allObjects)
        {
            var components = obj.GetComponents<Component>();
            foreach (var component in components)
            {
                // A null entry in the component list means the script reference is missing.
                if (component == null)
                {
                    missingList.Add($"Missing script on GameObject: {obj.name}");
                }
            }
        }

        missingScripts = missingList.ToArray();
    }
}

Now, let's head over to the Unity Editor and start using this tool. As shown in Image 01, you'll find the Script Dependency Checker under Tools | Script Dependency Checker in the menu bar.

Image 01 - Unity's menu bar

When you click on it, a window will open with a button and a debug section that will display any game objects with missing script references, if found. You can see this in Image 02.

Image 02 - Script dependency window

After pressing the button, we discovered a game object named AudioManager with a missing script, as shown in Image 03.

Image 03 - Results of the Checker

Next, we can search for AudioManager in the hierarchy and address the issue by either reassigning the missing script or removing it entirely if it's no longer needed, as shown in Image 04.

Image 04 - Game Object with missing script

Learning More

Explore Unity Documentation: Unity's official documentation provides comprehensive guides on editor scripting.
Join Developer Communities: Engage with forums and communities like the Unity Developer Community or Stack Overflow to exchange ideas and get support.
Experiment with Examples: Study and modify existing tools to understand their functionality and apply similar concepts to your projects.

Conclusion

Creating custom tools in Unity not only enhances your productivity but also ensures a smoother and more efficient development process. As you experiment with building and implementing your tools, consider other repetitive tasks that could benefit from automation. Whether it's organizing project folders or generating procedural content, the possibilities are endless. By leveraging custom tools, you'll gain more control over your development environment and focus on what truly matters—bringing your game to life.
For more insights into Unity development and custom tools, check out my book, Mastering Unity Game Development with C#: Harness the Full Potential of Unity 2022 Game Development Using C#, where you'll find in-depth guides and practical examples to further enhance your game development skills.

Author Bio

Mohamed Essam is a highly skilled Unity developer with expertise in creating captivating gameplay experiences across various platforms. With a solid background in game development spanning over four years, he has successfully designed and implemented engaging gameplay mechanics for mobile devices and other platforms. His current focus lies in the development of a highly popular multiplayer game, boasting an impressive 20 million downloads. Equipped with a deep understanding of cutting-edge technologies and a knack for creative problem solving, Mohamed Essam consistently delivers exceptional results in his projects.


Unlocking Excel's Potential: Extend Your Spreadsheets with R and Python

Steven Sanderson, David Kun
17 Oct 2024
5 min read
Introduction

Are you an Excel user looking to push your data analysis capabilities beyond the familiar cells and formulas? If so, you're about to embark on a transformative journey. With the integration of R and Python, you can elevate Excel into a powerhouse of advanced data analysis and visualization. In this blog post, inspired by the book "Extending Excel with Python and R," co-authored by myself and David Kun, we will dive deep into practical implementation, focusing on how to automate data visualization in Excel using these powerful programming languages.

Practical Implementation: Creating Advanced Data Visualizations

In the world of data analysis, visual representation is key to understanding complex datasets. Excel, while equipped with basic charting tools, often requires enhancement for more sophisticated visuals. By integrating R and Python, you can create dynamic and detailed graphs that bring your data to life.

Task: Automating Data Visualization with Python and R

Step-by-Step Guide

Step 1: Set Up Your Environment
Before jumping into visualization, ensure you have the necessary tools installed. You will need:
Excel: Ensure you have a version that supports VBA (Visual Basic for Applications).
Python: Install Python on your computer. You can download it from the official Python website.
R: Similarly, install R from the Comprehensive R Archive Network (CRAN).
Libraries: For Python, install `pandas`, `matplotlib`, and `openpyxl` using pip. For R, install `ggplot2` and `readxl`.

Step 2: Importing Data
Begin by importing your Excel data into Python or R. In Python, this is a one-liner with pandas' read_excel; in R, use readxl (a consolidated Python sketch of Steps 2 to 4 follows Step 5).

Step 3: Creating Visualizations
Python example: using Matplotlib, you can create a simple line plot from the imported data frame.
R example: with ggplot2, the process is equally straightforward, where df is some data frame already loaded in.

Step 4: Integrating Visualizations into Excel
Once your visualization is created, the next step is to integrate it back into Excel. This can be done manually, or you can automate it using VBA or an API endpoint.
Python integration: using openpyxl, you can embed the saved chart image directly into a workbook.
R integration: for R, you might automate this process using R scripts that interact with Excel via VBA or other packages like `officer`.

Step 5: Automating the Entire Workflow
To automate, consider using Python scripts executed from Excel VBA or R scripts called through Excel's RExcel plugin. This way, you can refresh data and update visualizations with minimal effort.
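Since the individual snippets for Steps 2 to 4 are described rather than shown above, here is a consolidated, hedged Python sketch of the same workflow (my own example, not the book's code). File names, sheet names, and column names are illustrative assumptions; it requires pandas, matplotlib, and openpyxl (plus Pillow, which openpyxl uses for image embedding).

import pandas as pd
import matplotlib.pyplot as plt
from openpyxl import load_workbook
from openpyxl.drawing.image import Image as XlsxImage

# Step 2: import the Excel data (file and sheet names are illustrative).
df = pd.read_excel("sales.xlsx", sheet_name="Sheet1")

# Step 3: create a simple line plot with Matplotlib and save it as an image.
fig, ax = plt.subplots()
ax.plot(df["Month"], df["Revenue"])
ax.set_xlabel("Month")
ax.set_ylabel("Revenue")
fig.savefig("revenue_plot.png", dpi=150, bbox_inches="tight")

# Step 4: embed the saved chart back into the workbook with openpyxl.
workbook = load_workbook("sales.xlsx")
sheet = workbook["Sheet1"]
sheet.add_image(XlsxImage("revenue_plot.png"), "E2")  # anchor the image at cell E2
workbook.save("sales_with_chart.xlsx")

The R route follows the same shape: read with readxl, plot with ggplot2, and push the result back into Excel from a script.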
Conclusion

By integrating R and Python with Excel, you unlock a realm of possibilities for data visualization and analysis, turning Excel from a simple spreadsheet tool into a comprehensive data analytics suite. This guide provides a snapshot of what you can achieve, and with further exploration, the potential is limitless.

Author Bio

Steven Sanderson is a Manager of Applications with a deep passion for data and its complements: cleaning, analysis, visualization, and communication. He is known primarily for his work in R. After his MPH, Steven continued his work in the healthcare industry as a clinical decision support analyst, working his way up to Manager of Applications at Stony Brook Medicine for Patient Financial Services. He currently focuses on expanding the functions in his healthyverse suite of packages while also slimming them down and improving their robustness. He also enjoys helping mentor junior employees to set them up for success.

David Kun is a mathematician and actuary who has always worked in the gray zone between quantitative teams and ICT, aiming to build a bridge. He is a co-founder and director of Functional Analytics, the creator of the ownR infinity platform. As a data scientist, he also uses ownR for his daily work. His projects include time series analysis for demand forecasting, computer vision for design automation, and visualization.

Looking to Master Excel with Python and R?

If you're excited about extending Excel's capabilities with powerful tools like Python and R, Extending Excel with Python and R, authored by Steven Sanderson and David Kun, offers an in-depth guide to seamlessly integrating these languages into your Excel workflow. It covers everything from automating data tasks to advanced visualizations, all tailored for Excel enthusiasts.


Learn Transformers for Natural Language Processing with Denis Rothman

Expert Network
31 Aug 2021
7 min read
Key takeaways

The transformer architecture has proved to be revolutionary, outperforming the classical RNN and CNN models in use today.
Artificial intelligence is simply a recent form of automation, just like all other automation. AI consultants will always be necessary to implement AI.
Understand transformers from a cognitive science perspective with the book Transformers for Natural Language Processing.

The transformer architecture is both revolutionary and disruptive, making it the hottest algorithm in AI. It is a game-changer for Natural Language Understanding (NLU), a subset of Natural Language Processing (NLP), which has become one of the pillars of artificial intelligence in a global digital economy. Transformers can outperform the classical RNN and CNN models in use today. We interviewed artificial intelligence expert Denis Rothman about transformers, their advancement in artificial intelligence and NLP, and his recent book, Transformers for Natural Language Processing.

What's the significance of AI language understanding in the tech world today and what role do transformers play in it?

Artificial intelligence-driven language understanding is expanding exponentially. It has become the pillar of language modeling, chatbots, personal assistants, question answering, text summarizing, speech-to-text, sentiment analysis, machine translation, and more. The Transformer, introduced by Google, provides novel approaches to language understanding through its self-attention architecture. OpenAI offers transformer technology, and Facebook's AI Research department provides high-quality datasets. Overall, the Internet giants have made transformers available to all, as you will discover in my book. The transformer architecture is both revolutionary and disruptive. The Transformer and subsequent transformer architectures and models are revolutionary because they changed the way we think of NLP and artificial intelligence itself. The architecture of the Transformer is not an evolution. It breaks with the past, leaving RNNs and CNNs behind. It takes us closer to seamless machine intelligence that will match human intelligence in the years to come.

What should deep learning and NLP practitioners keep in mind while starting their career with transformers?

The world of artificial intelligence is undergoing an exponential evolution in NLP due to the amount of data available. As this evolution expands to all domains, new abilities are required. NLP will not just be about downloading a model and getting to work in terms of software. You will have to analyze the quality of what a transformer model produces to fine-tune it. In turn, to analyze NLP properly, a minimum knowledge of linguistics will become mandatory. Linguistics will enable you to understand the building blocks and structure of a language. Grammar will increase your ability to analyze the output of a transformer. Otherwise, your team will have to hire a linguist, which will increase the project's cost and threaten the return on investment (ROI) of the team.

What are some future advancements that you anticipate in transformers and NLP?

Transformers have wiped RNNs off the map at this point. They represent the industrialization of artificial intelligence. As artificial intelligence, transformers are taking AI from the hype to an industrial level. Unlike traditional deep learning models, transformers contain optimized layers for GPUs and CPUs. In the future, creating NLP models will require machine architecture awareness.
Machine performance will be the key to more efficient models. Not everybody can purchase or rent a supercomputer to train a model. Learning how to design tailored transformer models based on optimized datasets will become mandatory to face the competition.

What are some of the popular myths around transformers prevalent in the tech market?

Many people believe that transformers can perform all NLP tasks with a model such as GPT-3. Nothing can be further from the truth. Google, Microsoft, Facebook, and Amazon, for example, need data for their everyday business and powerful NLP transformer models to analyze the billions of words coming in every day. However, the tasks are limited to their marketing usage. If you need to implement a transformer in a specific area, you will have to build datasets. You will also have to build pipelines with classical algorithms and queries to process the data, handle the inputs, and manage the outputs. In real life, that means that artificial intelligence is only a component in a long chain of classical algorithms and processes.

How was your experience building one of the very first word2matrix embedding solutions?

In the early 1980s, I managed a company with many students who wanted to learn a language. I had a choice: increase the number of teachers or automate vast portions of the process. I decided to go for automation. Any intelligent system requires calculations. I found that converting words and word pieces into numbers was far more efficient than directly analyzing the words. I thus created a word2vector system, patented it in 1982, wrote a textbook, and implemented it in our company. Students began to take specific courses independently in our lab without a teacher. I then went further in the next few years, writing one of the first cognitive NLP chatbots, which was successfully implemented for an industrial number of students.

Being the author of three cutting-edge AI solutions, what is your take on the shrinkage of job opportunities due to AI?

Automation began centuries ago with water mills, windmills, textile machines, locomotives, and, more recently, motorized personal vehicles in the early 20th century. Tractors replaced millions of jobs in the fields. Services are no exception. In the 1950s, hundreds of thousands of tellers, actual humans, worked in banks around the world. Today everybody goes to an ATM. ATM stands for Automated Teller Machine; "automated teller" says it all. A person performing a service was automated. Software is the automation of human tasks from the beginning, from accounting to stock market management and thousands of tasks. Artificial intelligence is simply a recent form of automation, just like all other automation. AI cannot replace traditional mathematics in physics. The calculation of differential equations driving rockets and satellites requires classical software precision, not artificial intelligence. AI is only a component of automation, like when cars replaced horses and all of the jobs that went with horse-driven transportation. AI will not replace everything because AI is useless in many fields. AI consultants will always be necessary to implement AI.

Why has Python become the most suitable language for natural language processing?

It's important not to confuse the concepts of "most used" and "most suitable." Python is a great intuitive language to learn AI and NLP. But it's not a prerequisite. Python is easy to use and run, making it the shortest path, at this point, to take to learn AI. But do not be mistaken.
C++ skills will also be required in large real-life projects, for example. My advice: learn AI with Python at full speed. Do some implementations with Python. But learn other languages such as C++, Java, and more. Real-life pipelines require classical processes and algorithms, not only AI. In some projects, C++ will boost performance, for example.

Tell us about your book, Transformers for Natural Language Processing. What trajectory does your book follow to help its readers master transformers?

Reading my book on transformers will help you save weeks and maybe months of effort trying to understand how they work by watching videos and reading blogs. The reader will begin by learning the original Transformer in depth. Once the transformer's building blocks are mastered, the reader will learn how to train and fine-tune a transformer. The reader will then build and run the main transformer models such as BERT, RoBERTa, GPT-2, T5, and more. The models will be applied to NLP tasks such as document summarization, Q&As, semantic analysis, and a wide range of other NLP tasks. The book contains a method to analyze fake news with transformers. The book also goes beyond the architecture of transformers and into the world of usage. You will learn how to build, train, fine-tune, and implement transformers.
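For readers who want a first taste of the hands-on side Rothman describes, here is a hedged sketch (my own example, not the book's code) of loading and running a pretrained transformer with the Hugging Face transformers library, assuming it is installed:

from transformers import pipeline

# Download (on first use) and load a pretrained sentiment-analysis transformer.
classifier = pipeline("sentiment-analysis")

results = classifier([
    "Transformers have replaced RNNs in most NLP pipelines.",
    "Fine-tuning this model on our data was painful.",
])

for result in results:
    print(f"{result['label']}: {result['score']:.3f}")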


Clean Coding in Python with Mariano Anaya

Expert Network
27 Jul 2021
7 min read
Key takeaways

Clean code isn't just a nice thing to have or a luxury in software projects; it's a necessity. If we want our project to successfully deliver features constantly at a steady and predictable pace, then having a good and maintainable code base is a must.
The true nature of clean code relies on the fact that other practitioners should be able to read and maintain the code.
Read the book Clean Code in Python, Second Edition, to know all about idiomatic Python, see the difference between good and bad code, and identify traits of good code and good architecture.

There is no sole or strict definition of clean code. Moreover, there is probably no way of formally measuring clean code, so you cannot run a tool on a repository that will tell you how good, bad, or maintainable that code is. Sure, you can run tools such as checkers, linters, static analyzers, and so on, and those tools are of much help. They are necessary, but not sufficient. Clean code is not something a machine or script can recognize (so far) but rather something that professionals can decide. We interviewed Python expert and bestselling author Mariano Anaya on clean coding, the importance of efficient code formatting, and his recent book, Clean Code in Python, 2nd Edition.

The interview in detail:

1. To what extent can the inability to write efficient code harm or affect an organization or software?

In my experience, inefficient code can be so dangerous as to paralyze entire projects. I've seen services that needed to be rewritten because of how unmaintainable they were. At some point, it became impossible to keep on making changes to that API, and the issues kept piling up, so it needed to be replaced by a brand-new system. On another occasion, there was an application we knew had problems because of the way it was written, and its instability was causing frustration in customers, which permeated into the company. The buggy nature of the application wasn't separate from the way it was written; rather, it was the consequence. Customers were complaining about quality, and this shows to what degree technical debt can harm an organization. I've seen this pattern several times, when the company must make the hard decision of stopping the release of new features to fix errors in the software. I'd say that technical debt, if left untreated, can lead to very harmful results for a company.

2. What should developers keep in mind while starting out with legacy systems?

First, identify the degree of technical debt accrued. There are good software projects that have been designed correctly and whose technical debt is relatively low (perhaps it's just about updating some libraries to newer versions or moving parts of the code towards new features that weren't available at the time it was originally written). In the event of having a lot of technical debt, it's important to understand what's the most critical part that needs to be fixed. There's certainly a part in the code, a module, or a functionality that's responsible for most of the complaints from customers, and that's what needs to be refactored more urgently. It's critical in this sense to do a proper analysis and have a plan about the improvements to make in the code, rather than jump straight to the code and start refactoring. This will help give a clearer idea of what needs to be changed and the degree of refactoring needed; that is, whether we'll be fixing the code or the situation requires a full rewrite.
Generally, completely rewriting the application should be a last-resort kind of decision, although there are some obvious cases (for example, if the application was written with Python 2, then it's clear that all the code will need to be changed).

3. What are the future advancements that you anticipate in Python?

It's hard to know for sure what will happen with Python in the years to come, but it's interesting to see that, in a similar way to how Python took inspiration and features from other languages, it's now inspiring modern languages as well, while also catching up with new features and programming models. Such was the case with all the improvements made for asynchronous programming, incorporated into the standard library. I believe the asynchronous programming capabilities will continue to be enhanced in future releases. I have also noticed some improvements towards trying to make Python more efficient, whether this means having a more lightweight interpreter by reducing the number of packages in the standard library, or trying to solve the GIL problem. These are the kinds of improvements I'm most hopeful to see.

4. What are some of the popular myths around writing clean code?

Perhaps the most common misconception is that clean code is about formatting code, or maybe even about PEP-8. In fact, relating technical debt only to code issues is another popular myth. Technical debt is also about technology, such as being caught up with dependencies. Being able to update your dependencies quickly in case there's a security issue is also a concern related to technical debt, and therefore to clean code. Things like the speed of iteration, how fast and frequently deployments can be made, and the adaptability of the architecture play an important role in the success of the project.

5. Tell us about your book, Clean Code in Python, Second Edition. What trajectory does your book follow to help its readers develop maintainable and efficient code?

The first chapter starts with an introduction to the importance of having a well-structured code base, presenting a framework for the chapters to come. This is supported by tools and recommendations on how to set up a project for success, considering automated tools that will help us format the code, and setting up a pipeline to effectively deploy our code with good quality gates (controls, tests, different stages). Then, the book introduces some Python-specific concepts, placing strong emphasis on the particularities of Python's syntax and a more succinct way of writing code, taking advantage of the features the language has to offer. There are some chapters that revisit general design ideas from software engineering, like object-oriented design and design patterns. From that point, the chapters explore topics of software engineering in terms of how they can be implemented in Python, using the particularities of the language itself. The idea of the book is to provide readers with the tools and concepts to understand what clean code means beyond any definitions given. It's a pragmatic book, oriented towards a practitioner's audience, so it focuses on how to get things done in an effective way, which often means accepting tradeoffs.

6. Does your book provide hands-on scenarios to practice the techniques it teaches?

Absolutely! The book has a very pragmatic, hands-on approach. As each idea is introduced, it's followed by examples that demonstrate how that implementation would work.
Moreover, I've put special effort into making the examples as realistic as possible. Considering that the examples need to showcase an idea irrespective of superfluous details (that is, leaving out everything that's not relevant to the explanation being made, and isolating the problem at hand), they're still real-world scenarios, pieces of code any reader can relate to their daily job. There are no made-up examples like Fibonacci series or things anyone wouldn't normally find in real code. Extrapolating from the examples, readers can use the code as a reference to solve their own problems. To practice more, there's a GitHub repository where all the code from the book lives, and it's constantly updated. There's also a Docker image for the entire setup of the book, with the environment already configured, which readers can use to test the code and learn by modifying it.

About the author

Mariano Anaya is a software engineer who spends most of his time creating software with Python and mentoring fellow programmers. Mariano's main areas of interest besides Python are software architecture, functional programming, distributed systems, and speaking at conferences. He was a speaker at EuroPython 2016 and 2017. To know more about him, you can refer to his GitHub account with the username rmariano. His Speaker Deck username is rmariano.

Understanding the Fundamentals of Analytics Teams with John K. Thompson

Expert Network
06 Apr 2021
6 min read
Key takeaways

Data scientists need a tailored portfolio of projects that they own and manage to have a sense of autonomy.
The top skill or personality trait a successful data scientist can possess (and should possess) is curiosity.
Managing a successful analytics team and individual analytics professionals is different than managing any other type of team.
Data and analytics will be ubiquitous in the very near future.

Analytics teams are different from any other team in the organization, and analytics professionals are a unique variant of creative professionals. Providing challenging, interesting, and valuable work in the form of a personal project portfolio for a data scientist can be done, and needs to be done, to ensure productivity, job satisfaction, value delivery, and retention. We interviewed analytics leader and bestselling author John K. Thompson on data analytics, the future of analytics, and his recent book, Building Analytics Teams.

The interview in detail:

1. What are the fundamental concepts of building and managing a high-performing analytics team?

It is critically important to remember that data scientists are creative and intelligent people. They cannot be managed well in a command-and-control environment. Data scientists need a tailored portfolio of projects that they own and manage to have a sense of autonomy. If they have a portfolio of projects and can manage their time and effort, the productivity of the team will be much higher than what is typically seen in teams managed in a traditional manner. The relationship of the analytics leader with their peers and executives of the company is critically important to the success of the analytics team. It is also very important to realize that most analytics projects fail at the point where analytical models are to be implemented in production systems.

2. Tell us about your book, Building Analytics Teams. How is your book new and/or different from other books on data analytics?

Building Analytics Teams is focused on the practical challenges faced by people who are building and managing high-performance analytics teams and the staff members who make up those analytics teams. The book is different from other books in that it examines the process of building and managing a team from a holistic view. The book considers the organizational framework, the required processes, the people, the projects, the problems, and the pitfalls. The content of the book guides the reader through how to navigate these challenges and provides illustrations and examples of how to be successful. The book is a "how to" guide on how to successfully manage the analytics process in a large corporate environment.

3. What was the motivation behind writing this book?

I have not seen a book like this, and I wish I had a book like this earlier in my career. I have built a number of analytics teams. While building and growing those teams, I noticed certain recurring patterns. I wanted to address the misconceptions and the misperceptions people hold about analytics teams. Analytics teams are unique. The team members who are successful have a different mindset and attitude toward project work and teamwork. I wanted to communicate the differences inherent in a high-performance analytics team when compared to other teams. Also, I wanted to communicate that managing a successful analytics team and individual analytics professionals is different than managing any other type of team.
I wanted to write a guide for managers and analytics professionals to help them understand how the broader organization views them and how they can interface and interact with their peers in related organizational functions to increase the probability of joint success.

4. What should be the starting point for data analytics enthusiasts aiming to begin their journey in data analytics? How do you think your book will help them in their journey?

It depends on where they are starting their journey. If they are in the process of completing their undergraduate or graduate studies, I would suggest that they take classes in programming, data science, or analytics. If they are professionals, I would suggest that they take classes on Coursera, Udemy, or any other online educational platform to see if they have a real interest in, and affinity for, analytics. If they do have an interest, then they should start working on analytics themselves to test out analytical techniques, apply critical thinking, and try to understand what they can or cannot see in the data. If that works out and their interest remains, they should volunteer for projects at work that will enable them to work with data and analytics in a work setting. If they have the education, the affinity, and the skill, then they should apply for a data science position. Grab some data and make a difference!

5. What are the key skills required for someone to be successful working in data analytics? What are the pain points and challenges one should know?

The top skill or personality trait a successful data scientist can possess (and should possess) is curiosity. Without curiosity, you will find it difficult to be successful as a data scientist. It helps to be talented and well educated, but I have met many stellar data scientists who are neither. Beyond those traits, it is more important to be diligent and persistent. The most successful business analysts and data scientists I have ever worked with were all naturally and perpetually curious and had a level of diligence and persistence that was impressive. As for pain points and challenges: data scientists need to work on improving their listening skills and their written and verbal communication and presentation skills. All data scientists need improvement in these areas.

6. What is the future of analytics? What will we see next?

I do believe that we are entering an era where data and analytics will be increasing in importance in all human endeavors. Certainly, corporate use of data and analytics will increase in importance, hence the focus of the book. But beyond corporations, the active and engaged use of data and analytics will increase in importance and daily use in managing multiple aspects of people's personal lives, academic pursuits, governmental policy, military operations, humanitarian aid, tailoring of products and services, building of roads, towns, and cities, planning of traffic patterns, provisioning of local, federal, and state services, intergovernmental relationships, and more. There will not be an element of human endeavor that will not be touched and changed by data and analytics. Data is ubiquitous today, and data and analytics will be ubiquitous in the very near future. We will see more discussions on who owns data and who should be able to monetize data. We will experience increasing levels of AI and analytics across all the systems that we interact with, and most of it will go unnoticed and operate in the background for our benefit.

About the author:
John K. Thompson is an international technology executive with over 30 years of experience in the business intelligence and advanced analytics fields. Currently, John is responsible for the global Advanced Analytics and Artificial Intelligence team and efforts at CSL Behring.


Imran Bashir on the Fundamentals of Blockchain, its Myths, and an Ideal Path for Beginners

Expert Network
15 Feb 2021
5 min read
With the invention of Bitcoin in 2008, the world was introduced to a new concept, blockchain, which revolutionized the whole of society. It was something that promised to have an impact upon every industry. This new concept is the underlying technology that underpins Bitcoin. Blockchain technology is the backbone of cryptocurrencies, and it has applications in finance, government, media, and many other industries.

Some describe blockchain as a revolution, whereas another school of thought believes that it is going to be more evolutionary, and that it will take many years before any practical benefits of blockchain reach fruition. This thinking is correct to some extent, but, in Imran Bashir's opinion, the revolution has already begun. It is a technology that has an impact on current technologies too, and it possesses the ability to change them at a fundamental level.

Let's hear from Imran on the fundamentals of blockchain technology, its myths, and his recent book, Mastering Blockchain, Third Edition.

What is blockchain technology? How would you describe it to a beginner in the field?

Blockchain is a distributed ledger which runs on a decentralized peer-to-peer network. First introduced with Bitcoin as a mechanism that ensures the security of the electronic cash system, blockchain has now become a prime area of research with many applications in a variety of industries and sectors.

What should be the starting point for someone aiming to begin their journey in blockchain?

Focus on the underlying principles and core concepts such as distributed systems, consensus, cryptography, and development without helper tools at the start. Once you understand the basics and the underlying mechanics, then you can use tools such as Truffle or some other framework to make your developer life easier; however, it is extremely important to learn the underlying concepts first.

What is the biggest myth about blockchain?

Sometimes people believe that blockchain IS cryptocurrency; however, that is not the case. Blockchain is the underlying technology behind cryptocurrencies that ensures the security and integrity of the system and prevents double spends. However, cryptocurrency can be considered one application of blockchain technology out of many.

"Blockchain is one of the most disruptive emerging technologies today." How much do you agree with this?

Indeed, it is true. Blockchain is changing the way we do business. In the next 5 years or so, financial systems, government systems, and other major sectors will all have blockchain integrated in one way or another.

What are the factors driving the mainstream adoption of blockchain?

The development of standards, interoperability efforts, and consortium blockchains are all contributing towards the mainstream adoption of blockchain. Demand for more security, transparency, and decentralization in some sectors is also a key driver of adoption; for example, a prime solution for decentralized sovereign identity is blockchain.

How do you explain the term bitcoin mining?

Mining is a colloquial term used to describe the process of creating new bitcoins, where a miner repeatedly tries to find a solution to a math puzzle, and whoever finds it first wins the right to create a new block and earn bitcoins as a reward.
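As a toy, hedged illustration of the "math puzzle" Imran describes (my own sketch, far simpler than real Bitcoin mining), the Python snippet below searches for a nonce that makes a block's SHA-256 hash start with a chosen number of zeros:

import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple:
    # Toy proof of work: find a nonce whose hash has `difficulty` leading zeros.
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block #1: Alice pays Bob 5 coins")
print(f"nonce={nonce}, hash={digest}")

Raising the difficulty makes the search exponentially harder, which is how the real network keeps the rate of new blocks roughly constant.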
That can have a good impact on the global economy. Furthermore, the development of CBDCs (central bank digital currencies) is expected to have a major impact on the economy and help to stabilize it. From an inclusion point of view, blockchain can allow unbanked populations to play a role in the global financial system. If cryptocurrencies replace the current monetary system, then, because of the decentralized nature of blockchain, major cost savings can be achieved, as no intermediaries or banks will be required, and a peer-to-peer, extremely low-cost, global financial system can emerge which can transform the world economy. The entire remittance ecosystem can evolve into an extremely low-cost, secure, real-time system which can include people who were previously unbanked. The possibilities are endless.

Tell us a bit about your book, Mastering Blockchain, Third Edition.

Mastering Blockchain, Third Edition is a unique combination of theory and practice. Not only does it provide a holistic view of most areas of blockchain technology, it also covers hands-on exercises using Ethereum, Bitcoin, Quorum, and Hyperledger to equip readers with both theoretical and practical knowledge of blockchain technology. The third edition includes four new chapters on hot topics such as blockchain consensus, tokenization, Ethereum 2, and enterprise blockchains.

About the author

Imran Bashir has an M.Sc. in Information Security from Royal Holloway, University of London, and has a background in software development, solution architecture, infrastructure management, and IT service management. He is also a member of the Institute of Electrical and Electronics Engineers (IEEE) and the British Computer Society (BCS). Imran has extensive experience in both the public and financial sectors, having worked on large-scale IT projects in the public sector before moving to the financial services industry. Since then, he has worked in various technical roles for different financial companies in Europe's financial capital, London.
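The mining answer above describes proof-of-work only at a high level. What follows is a deliberately simplified, hypothetical sketch of the "keep trying nonces until the hash meets a target" loop; real Bitcoin mining hashes a structured block header with double SHA-256 against a compact difficulty target and runs on specialized hardware, so treat this purely as an illustration of the idea.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            # Whoever finds a qualifying nonce first "wins" the right to propose the block.
            return nonce, digest
        nonce += 1

if __name__ == "__main__":
    nonce, digest = mine("previous-hash|transactions|timestamp", difficulty=5)
    print(f"found nonce {nonce} -> {digest}")
```

Raising the difficulty by one hex digit makes the search roughly 16 times harder on average, which is, in spirit, the same lever the Bitcoin network uses to keep block times stable as more mining power joins.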

Bringing AI to the B2B world: Catching up with Sidetrade CTO Mark Sheldon [Interview]

Packt Editorial Staff
24 Feb 2020
13 min read
Sidetrade is an organization on a mission to transform customer engagement in the world of B2B marketing with the help of artificial intelligence. With its own AI technology - Aimie - it's now in a strong position to carve out a niche for itself in a market that shows no signs of slowing down. What makes the company even more exciting for us at Packt is that they're just a stone's throw away from our offices in Birmingham. To get the lowdown on Sidetrade, we spoke to CTO Mark Sheldon about the company's evolution and what the future might hold. Read the interview below.

Packt: Tell us a bit about your background and what you're up to today.

Mark Sheldon: I started my career as a developer and moved into the management of a large technical team at one of the 'big six' utility companies in the UK. Back in 2013, when the AI buzz was in its infancy, I co-founded a predictive analytics software company called BrightTarget. It was clear there was a better way for B2B organisations to gain more value from their data, and cloud computing and machine learning were clear market changers. In 2017 BrightTarget was acquired by Sidetrade, and at this point I became part of their technical leadership team, with the goal of making Sidetrade an AI-driven company. More recently I moved into the Group CTO position (as part of the global leadership team), responsible for more than 85 staff. Sidetrade has a total of 250 staff across six offices in Europe, with expansion planned in 2020.

The AI boom and its impact on the B2B landscape

Packt: Gartner predicts that this year 30% of B2B companies will use AI to augment at least one of their primary sales processes. What's your take on this?

Mark: Yes, only 30%, so this market is still just emerging. Although machine learning has been around for decades, there's still a lot of confusion around AI in many B2B organisations, mostly caused by all of the market and vendor hype. Very few have successfully deployed machine learning and are able to demonstrate value. However, for those that have, the potential for commercial gain from deploying AI is huge. The most common processes impacted in sales and marketing are those which involve interactions with customers or prospects at scale, where the decision-making of a human can be augmented or improved: for example, identifying customers most at risk of churn, customers with the best opportunity to sell more product, or prospects with the highest propensity to become a customer. AI really allows sales and marketing teams to optimize their time and marketing spend.

BrightTarget and Sidetrade

Packt: You co-founded BrightTarget, which was acquired by Sidetrade in 2016. Could you tell us a little bit about BrightTarget?

Mark: BrightTarget was founded in 2014, on the principle of helping B2B organisations deploy AI without the need for expensive and hard-to-find data scientists. We invested significantly in automating the process of data loading, processing (feature generation), model building, and monitoring. We achieved strong traction with some large enterprise accounts and were recognized by Forrester as a "Strong Performer".

Packt: How did the acquisition come about?

Mark: At this time [when BrightTarget was founded], Olivier Novasque (the Sidetrade CEO and founder) had a clear vision to transform Sidetrade into an AI-driven business. So the acquisition of BrightTarget in November 2016 was a natural fit with the ambitions of Sidetrade and their goals.
This has proven to be a great move with the launch of Aimie (Sidetrade's AI engine), which has contributed significantly to the subsequent revenue growth following the acquisition.

Aimie: Sidetrade's AI technology

Packt: Tell us a bit more about Aimie. How does it work? What's the thinking behind it?

Mark: Aimie is Sidetrade's proprietary AI technology that helps our customers augment their daily experience within our products. For example, Aimie helps every cash collector make the optimum collection decisions, even if they have only joined the company two weeks ago! This AI technology is at the heart of our SaaS platforms: Augmented Revenue (helping B2B organisations to manage their revenue, including managing revenue at risk and finding opportunities to grow revenue from existing customers) and Augmented Cash (again, helping B2B organisations improve working capital through better cash collection).

We also have an unrivalled data lake built up over 20 years. We now have 230 million B2B payment experiences, totaling sales of over 700 billion euros [£621bn], which we train our AI on and which enriches our clients' own data. More good-quality data for AI to train upon means better predictions and outcomes.

For example (as reported in Fortune and Forbes), one of our enterprise clients is Manpower, one of the biggest recruitment firms in the world. With an annual income of €4bn per year, Manpower France collects 1.3 million receivables from 80,000 companies. To handle this volume, and increasingly complex payment procedures, Manpower's finance department started using Sidetrade technology in 2013, and introduced Aimie in 2018-19. Manpower started Aimie off with two customer portfolios for a period of two months. Aimie analyzed what worked before for Manpower, directly executed automatic follow-up actions, and established which past-dues to target first. She considered available resources (staff hours, workloads) in order to take optimal actions. Encouraged by the results, Manpower ramped up their use of Aimie. Within four months, Aimie was managing nearly 60% of their single-site customers, which represents over 5,000 accounts and nearly 10,000 follow-up actions per month. Manpower has over 700 payer centers to manage, making it impossible for a manager to call all the debtors in their portfolio. Aimie helped them decide which customers to contact first. After nine months of testing, the results were clear: with support from Aimie, the effectiveness of recovery actions grew 12%. That's a good improvement in cash collection, which boosts working capital, vital for business.

Sidetrade's data science team

Packt: Sidetrade has a data science team. What is it and how does it function? How do your data engineers and data scientists work in tandem with the product teams to create AI-powered B2B solutions for customers? Do they also work on customized solutions?

Mark: Dr Clement Chastagnol (PhD in AI and robotics) leads our data science team. We currently have a team focused more on research topics, who really push the boundaries on some of the latest aspects of AI. However, the majority of our data scientists work directly within product-led squads (with a mixture of different data, application, and ops engineers). The reason for this is to ensure we deliver actionable AI/ML into our products on a regular basis, and to ensure we are customer (and therefore product) focused.
As a SaaS company with thousands of customers and users, almost all of the work we do is to improve our overall products and add features that benefit the majority. This also applies to data science, although we have a very advanced ML platform which allows us to automatically build and manage thousands of ML models that are often client-specific.

In terms of research, each year we work with the French Government's business ombudsman to research and produce an index report of all B2B business payment disputes, including figures by industry and length of delay. This involves our data scientists analysing over 9,000 French customer companies, representing 91% of large corporations and organizations with 250 to 5,000 employees. The data analyzed covers over 2.8 million invoices totaling €12bn.

Also as part of our research work, we have received funding from the French Government, EU Commission agencies, the French national agency for research, and the DataAI Institute on the following projects:

• Eurofirmo, an index of all 26 million businesses in the EU and Britain, including headcount and revenue, which has never been done before.
• Re-search Alps, a collaboration with academics from four universities that aims to track all research-active research institutes across seven European countries. It records their research projects, funding, publications, patents, and other academic output.
• Dirty Data, under which two research axes are funded. One revolves around dirty data integration, funded by the ANR (Agence Nationale de la Recherche). The other strives to develop new techniques to analyze incomplete data and is funded by the DataAI Institute. As part of this project, we've worked with Gaël Varoquaux (ML researcher and creator of scikit-learn), which has been great.

Alongside all these projects, the team has recently worked with Facebook Research on the topic of data drift, as well as publishing research in journal papers, attending and presenting at academic conferences, supporting PhD students, and running hackathons and guest lectures at universities.

Developing new talent in the AI space

Packt: Sidetrade recently launched The Code Academy. What is it? How can developers take part in this initiative and how will it benefit them? What are other key initiatives by Sidetrade?

Mark: The Code Academy is designed to develop the next generation of AI talent and is important for Sidetrade to maintain its position as a leading AI-powered customer platform. The Code Academy, which was piloted in 2018, is part of Sidetrade's commitment to providing engineering skills and jobs for young people in the Midlands, to keep the UK at the forefront of the AI industry. It's a new, rapid approach to training and job creation. We welcome trainees with computing and non-computing backgrounds who can demonstrate ability and passion for technology. It's rapid, as we design and deliver the academy in-house over four weeks, with a lot of support from senior developers in the team. We train for job roles, rather than just imparting coding. And it's offered without cost to the trainee, so money isn't a barrier. At the end of four weeks the trainees are given a challenge and asked to present their work to an audience of senior staff.
Academy modules include:

• Becoming familiar with Git
• Setting up VSCode for .NET and web development
• Loading a relational data set through pgAdmin (CSV)
• Learning how to write T-SQL to analyse and find trends within a data set
• Learning the concept of, and developing, a basic RESTful service
• An introduction to Angular (using http://angular.io/start)
• Learning to connect all layers of the stack
• Using Kanban (Trello) to manage projects
• How to define an MVP

In 2018 we trained 10 people and offered software and data engineer roles to three. In 2019, we revamped the academy, making it much more practical, and selected 12 trainees from 50 applicants. The quality of the talent was so good we offered five trainees data and software engineer roles with our professional services and R&D teams.

Expanding the team

Packt: You're about to move into a new, much bigger office in central Birmingham. What are your challenges in terms of expanding the team? Can you elaborate on the challenges faced by the team in terms of working with AIOps?

Mark: That's right, we're going to open a new Tech Hub that will house a much bigger team of data and software engineers working across the full stack. We'll also run our 2020 Code Academy from the hub. A special launch event, Together for Tech, will make the opening official on 27th February 2020, with VIP guests and tech and business stakeholders. We'll also be announcing a major investment in R&D and job creation. There is huge potential for Birmingham to become a tech powerhouse within Europe.

The challenge for me is hiring enough senior-level tech professionals. These people are needed to lead teams, develop staff, and keep pushing the boundary of what we can do. There's an overall challenge to hire enough qualified professionals for the tech sector, and that is more acute at the senior levels. I think there's a temptation for experienced tech types to head to London or even America, so it's a challenge for the region to retain great talent.

The team has spent a lot of time on 'AI ops', which has emerged in the past three years. So, the other challenge is how we actually productionise all of the models and data engineering that our team and platform are producing. How do we deploy and run them in production? How do we monitor them? This is all about managing machine learning models at scale. For me, the sheer volume the team has to deal with is the biggest thing. We train thousands of different predictive models for different clients, and they are changing all of the time in terms of the data they are trained on. So actually, building workflows and processes to help monitor those in production at that kind of scale, without having to scale the team in proportion, is probably the biggest challenge. [A minimal sketch of this kind of per-model monitoring loop appears at the end of this interview.]

The future of AI and automation for B2B marketing and sales

Packt: AI is a really broad term. It often gets used interchangeably with machine learning and deep learning. Do you think this confusion is risky or dangerous? Do you think people should simply stop talking about AI in favour of machine learning and deep learning?

Mark: I think we've reached a point where AI has become a buzzword and a catch-all phrase. I think we can and should start being more sophisticated about what we mean; it's a work of education. From a vendor and business point of view, AI is no longer a differentiator as everyone is talking about it, so it makes it harder to stand out.
But as decision-makers become more educated on the topic, it's clear which vendors have the expertise and depth of data to deliver true AI-powered solutions.

Packt: What do you expect to come next in the B2B sales and marketing space? And how is automation of this space likely to impact other industries?

Mark: My prediction for the next big thing to come in the AI space would be a major breakthrough in quantum computing by either Google or one of the startups specializing in the field. In the B2B sales and marketing space, I think the next step is just wider adoption and trust that AI can augment or even outperform humans. Most businesses will need to go through the cultural, and often organisational, shift that is required to get the full commercial benefits out of AI.

Thanks for taking the time to speak to us, Mark! We'll be watching Sidetrade closely over the months and years to come. It's also great to see such an exciting and innovative company growing in Birmingham, right near the Packt office. Learn more about Sidetrade: www.sidetrade.com
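The monitoring challenge Mark describes (thousands of client-specific models whose input data keeps shifting) is usually attacked with an automated drift check that runs per model. The sketch below is a generic, hypothetical illustration of that idea, not Sidetrade's platform: it computes a Population Stability Index per model and flags the ones worth a human look. The `load_recent_scores` and `alert` arguments are placeholder hooks you would wire to your own storage and alerting.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI: a simple, widely used score for how far a score distribution has drifted."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def nightly_drift_check(baselines: dict, load_recent_scores, alert) -> None:
    """Loop over many models and flag the ones whose recent scores have drifted."""
    for model_id, baseline in baselines.items():
        recent = load_recent_scores(model_id)        # placeholder data-access hook
        psi = population_stability_index(baseline, recent)
        if psi > 0.2:                                # a conventional "investigate" threshold
            alert(model_id, psi)                     # placeholder alerting hook

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baselines = {"client-a": rng.normal(0.3, 0.1, 5000), "client-b": rng.normal(0.6, 0.1, 5000)}
    recent = {"client-a": rng.normal(0.3, 0.1, 1000), "client-b": rng.normal(0.75, 0.1, 1000)}
    nightly_drift_check(baselines, recent.__getitem__, lambda m, s: print(f"{m}: PSI={s:.2f}"))
```

The point is not the particular metric; it is that a small amount of automation lets one team watch thousands of models without growing in proportion.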

On Adobe InDesign 2020, graphic designing industry direction and more: Iman Ahmed, an Adobe Certified Partner and Instructor [Interview]

Savia Lobo
24 Jan 2020
12 min read
Gone are the days when graphic design was solely focused on the obvious graphic elements of a product, like its packaging and marketing materials. Today, technology and the digital revolution have a huge impact on how we communicate, the way we work, and even the way we socialize, and graphic design is no exception to this change. Technology plays a major role in the creation of digital work in many fields; for example, portfolio design, presentations, signage, logos, websites, animations, and even architectural production have all traveled far since the dawn of the digital revolution. This evolution has given brands greater exposure online and given users engaging and interesting graphics.

Recently, we had a conversation with Iman Ahmed, an Adobe Certified Partner and Instructor and CompTIA Certified Technical Trainer, on the current graphic design industry, various design tools, and where the industry is heading. Iman has 19+ years of solid international experience in teaching various applications, spanning architecture, graphic design, infographics, motion graphics, photo editing, magazine and book design, video production, and 3D modelling. We also discussed her recently published book, Mastering Adobe InDesign 2020, a step-by-step guide to the InDesign framework, workspace, project setup, master pages, pages, and text, among other core features. It also explores the new features in the InDesign 2020 release and how they are useful to graphic design professionals.

Adobe InDesign, a preferred choice for designing text-heavy documents

Adobe's Creative Suite of tools offers graphic designers all kinds of solutions needed to create professional and engaging graphics. From photo editing to typography tools to sound design, Adobe has practically every type of design project covered. So how can one compare Adobe InDesign with other tools, specifically with Illustrator and Photoshop?

Iman clarifies, "Adobe applications have no relation to the user's level of experience; it is all about the purpose. Adobe Photoshop is the best application for photo editing, while Adobe Illustrator is best used for vector design, and Adobe InDesign is created for layout design, as it has tools and options that facilitate the layout design process."

Graphic designers use InDesign when they need to lay out a multi-page, text-heavy piece, whether in print or digital. It is a one-stop solution for designing a magazine, brochure, or booklet. Of the three applications, InDesign has the most robust typesetting features available. It also integrates with Adobe Digital Publishing Solution, allowing designers to create fully interactive e-books, magazines, and other digital publications.

Key features in the Adobe InDesign 2020 version

Adobe released the InDesign 2020 version on November 4, 2019. This release brings significant upgrades and changes requested by Adobe InDesign users. With this release, InDesign now supports the SVG file format. Graphic designers are able to use infinitely customizable, or variable, fonts within InDesign, and there is a more efficient way to place lines, or "rules," between columns of text. Additionally, the release includes improvements to InDesign's core performance, with launch times up to 25% faster.

Iman says about the new release, "Adobe InDesign fixed a lot of bugs in this release, and offered an improvement to resolve document corruption.
Opening a particular file, saving it, and closing it have become faster in this release, and text editing is also faster than in earlier releases. A bunch of text features have been added in the Adobe InDesign 2020 release, such as variable fonts and column rules, a new feature in spell check, plus support for five new languages: Thai, Burmese, Lao, Khmer, and Sinhala."

She further discussed why InDesign is considered a tool of choice for multi-page projects, and how the master page feature is one of the key features of InDesign. She says, "The master page is one of the most powerful features of Adobe InDesign. It acts as a template: a master page hosts all the common elements that need to repeat across document pages, and any changes made in the master page are reflected in all document pages that follow that particular master page. Hence, a professional designer has to be smart in designing the document and decide how many master pages to use in one document." You can read Chapter 6 of the book, Mastering Adobe InDesign 2020, to learn more about the master page feature and how to create different master pages for your document.

Iman's education in architecture is foundational to her graphic design career

Iman studied architecture design, and she mentions that her undergraduate and postgraduate education in architecture has contributed to her success as an Engineering Applications instructor and as a graphic design instructor and designer. Iman talks about her graphic design journey and shares this quote from Frank Lloyd Wright: "The mother of art is architecture. Without an architecture of our own we have no soul of our own civilization."

Iman takes pride in, and feels lucky about, having studied the mother of all arts, which guided her strongly in her graphic design career. She mentions that the steps she took can be common to everyone who feels passionate about learning graphic design. Undoubtedly, architecture was the cornerstone of her pathway, but gaining knowledge of the tools is also important for becoming a good designer. She says, "My graphic design path literally started when I used Adobe Photoshop for the first time in an architecture presentation. Then I felt interested to learn more about Adobe Photoshop and photo editing. One year later, I started my career as an Adobe Photoshop instructor, and a graphic designer for the same training center, beside my work as an architect."

"I taught myself more about Photoshop, plus a bunch of applications, using offline help, as YouTube was not available. For years, I kept searching for design ideas, learning new applications, teaching what I learnt, and practicing a lot. Honestly, this is not enough; to be a professional graphic designer, I still need more."

On her inspiration to write Mastering Adobe InDesign 2020

Iman says, "As a trainer, it is the most joyful moment when you help others to learn and improve. For more than 19 years, I have been teaching people from the Middle East, Europe, USA, Canada, Australia, Asia, and Africa. Their nationalities are different, but their enthusiasm looks the same. My dream is to spread knowledge and help more and more people to learn. So for me, writing a book about Adobe InDesign was a great chance to share my knowledge and experience with a broader audience. In my book I have shared 19 years of experience; it is not only about InDesign, but also about the design process.
It helps both designers and non-designers to work more efficiently with the Adobe InDesign tool and makes them aware of the steps to take before implementing a design. It covers various exercises and examples to enhance readers' skills, and sharpens the skills of intermediate and expert users."

The growth story of the graphic design industry is far from over

According to the U.S. Bureau of Labor Statistics, the graphic design field is expected to grow by 3% from 2018 to 2028, which is slower than average. More graphic designers are becoming freelancers, as compensation is a major issue in full-time job roles: companies pay lower salaries for junior- and intermediate-level roles in the graphic design department. Hence, designers prefer freelancing, as they can sell their creatives and design templates to multiple clients with less effort. Additionally, there are perks to being your own boss; for instance, you get to set your own working hours and choose your own jobs. As there are millions of freelancers all over the world, it is difficult to capture them in the statistics used to calculate the industry growth rate, which skews the reported figures and makes growth appear slower than it really is.

Iman shared her thoughts along similar lines. She says, "Competition is aggressive everywhere. The first reason for this competition is the unemployment phenomenon that we are facing in this decade. Globalization, and platforms that offer freelance designers for hire from everywhere, negatively affect the field and the market, as some designers are cheap, as per their economy and currency standards."

On the other hand, working at a firm has its own perks. The company will be responsible for maintaining your work environment, purchasing equipment and software, and building a client base. And graphic designers will be more likely to work regular hours for a predictable paycheck. On this, Iman says, "If you work hard on your portfolio, and know how to network well with others, definitely you will be hired by reputed organizations." If you're not sure what to start with, it's always a good idea to intern at a small or medium firm and gain experience in the industry. Then get to know your work style, and choose what fits best.

Graphic designers ditching corporate culture for freelancing

Out of more than 250,000 graphic designers in the U.S., almost 25% are self-employed. This number is expected to rise in the coming years as millennials ditch corporate culture for a freelance lifestyle. We asked Iman about a typical graphic designer's career graph and what pathways are available. We also asked if professionals require a degree to enter this field. Iman believes that "the graphic designer career path varies, and different routes can be the right pathway to a graphic design career. Your route depends on your target: would you like to be a professional in logo design, branding, web design, packaging design, book and magazine design, some of them, all of them, or even more?"

Further, she discussed the major steps that a graphic designer needs to take to start their journey.

Step 1: Learn graphic design

Iman recommends learning graphic design at a school which specializes in graphic design. She says, "Learning applications is not enough; applications are only tools that will help you to finish your work in a smart and easy way.
But without design fundamentals and concrete foundations, a good design will not be achieved."

Step 2: Get inspired by others

"Don't reinvent the wheel; learn from people with experience to save time," says Iman. She suggests watching others' work and the latest styles of design, which will inspire you and serve as another source of knowledge to learn from.

Step 3: Practice, practice, practice!

Practice makes perfect! Iman advises you to "create more than one idea for a design and find every possible way to create a design that forms the message you need to communicate. You must sketch, re-sketch, refine your sketch, implement it and edit it, for an exclusively perfect design."

Step 4: Read more books

Iman adds, "Read more and more books about graphic design theory, history, elements of design, color theories and other books about designing. These books have in-depth knowledge and a rich experience to develop designing skills."

Step 5: Select the right tool for designing

Iman emphasises being smart in selecting the tool for designing. She says, "You need to be aware of which application will help you in what task. For instance, one can create logos using Adobe Illustrator, then edit photos in Adobe Photoshop, and finally collect design elements smartly in Adobe InDesign."

On the future of graphic design and what to expect next

When it comes to the future of graphic design, the big things on everyone's mind are animation and VR, and digital media is rapidly becoming the future of graphic design. We asked Iman what to expect next. She explains, "All the fields, such as print media, web design, animation and VFX, are ways to form a message using visual communication tools and send it to the target audience, and from field to field there is a proper way to use them, based on the purpose. In the design industry we cannot predict what will happen tomorrow, but for sure, the competition in using visual communication tools will always remain a major factor in the improvement of all design fields."

Get Iman's book, Mastering Adobe InDesign 2020, today to start exploring the InDesign workspace and its different menus and functions, along with gaining insights into planning and executing a design. You'll also get hands-on with creating your first project, focusing on aspects such as working with text, images, and shapes.

Author Bio

Iman Ahmed is an architect who loves art and design. She started practicing graphic design in 1999, and during her 20 years of experience as a graphic designer, she has created many designs and magazines using Adobe Photoshop, Illustrator, and InDesign. Her passion for teaching is genuine and drives her to work hard to be a special kind of trainer. She started her teaching career in 2004; she has been an Adobe user since 1999 and an Adobe Certified Instructor since 2008. Iman has been a CompTIA Certified Technical Trainer since 2008, and she was interviewed by CompTIA in November 2016 as a model of a special kind of trainer, who studied and applied the CompTIA CTT+ program in a way that successfully polished her skills and teaching style. Iman is a classroom instructor and an online trainer who delivers courses in the Middle East and the UK.

Is DevOps experiencing an identity crisis? [Interview]

Packt Editorial Staff
07 Jan 2020
7 min read
The definition of DevOps is a hotly disputed topic among amateur practitioners and experienced engineers alike. Ironically, DevOps was actually supposed to bring some order into the messy and chaotic environment of IT software development. In DevOps Paradox, DevOps expert Viktor Farcic talks to fellow industry figures who reveal their perspectives on the trend and what it means to them. In this article, we'll see what some prominent people in the DevOps community have to say about DevOps. The quotes in this article are taken directly from the book.

So, how are we supposed to incorporate DevOps into our organizations if we don't even know what it is? Let's hear Viktor's thoughts about what DevOps is, its trends, and its future direction.

What is DevOps and why do we need it?

What is DevOps and why do we need it? What is the most important thing DevOps helps us achieve? What are the factors that drive the development of DevOps?

Viktor Farcic: Almost everyone gives a different answer to the question "What is DevOps?"; there is a huge discrepancy between the idea and the implementation. I believe that the main objective of DevOps is to enable self-sufficient, product-oriented teams capable of having full control of their products. That is in stark contrast with the way many companies operate today. Normally, the lifecycle of an application is split between many teams. Business analysts define requirements, architects work on guidelines that must be followed and frameworks that must be used, developers write code, testers are in charge of validations, operators deploy new releases, and so on and so forth. The problem is that each of those groups belongs to a different department and has different and often opposing objectives. Instead of fostering collaboration towards a common goal, different teams (departments) are looking out for their own shortsighted interests. DevOps tries to remove organization based on the type of tasks performed and unite all the expertise required for the whole lifecycle of an application into a single team reporting to a single person. It forces us to work together and it builds empathy.

Is DevOps a process? Or a set of technologies? What's your perspective on this area of debate?

Viktor: DevOps is neither of the two. Unlike some agile frameworks (e.g. Scrum), there is no prescribed process to follow. Similarly, there is no technology we can adopt that will convert us into "DevOps" teams. It is only an idea that developers, operators, and everyone else need to work together instead of being isolated in different silos. That does not mean that technology is not important; it is, but often for other than obvious reasons. Every new technology is created by a group of people that worked together to create it. As such, it always reflects the processes of those involved in creating it. Those processes, on the other hand, are a result of the culture of the people following them. In other words, every tech is a result of certain processes created as a result of the culture of the team that worked on it. So, even though we use an end result, it is a product of a process created in a specific culture. If we adopt a technology that does not match our own processes and culture, it will produce suboptimal results at best. So, we must either adopt technology that matches our processes and culture or use it to change them. One cannot work without the other. All in all, DevOps is an idea, not much more than that.
It's up to us to figure out which processes and technology will help us make it a reality.

What does a DevOps engineer do? Is it even a real job role? What are the core roles of DevOps engineers in terms of development and infrastructure?

Viktor: I don't think there is such a thing as a "DevOps Engineer." The term was invented by people who were not ready to apply the changes DevOps leads to. Most of the time, a "DevOps Engineer" is just a different name for someone working in shared services, operations, infrastructure, or whichever department was first to be renamed DevOps.

Do you think DevOps is experiencing an identity crisis?

Viktor: DevOps was never defined as a process. Agile, for example, got quite a few implementations that tell people what to do. Among others, we got Scrum, which clearly defines what to do. We could even argue that Scrum, being a set of practices that must be followed, is against the spirit of Agile, but that's a conversation for some other time. What matters is that no one defined the process behind DevOps. There is no such thing as a set of steps that must be followed daily or weekly. It is just an idea that we should work together and not throw things over department walls. As such, the way to accomplish that is open to interpretation. So, DevOps never had a clear identity, and therefore it cannot have an identity crisis either. It's just an idea, and it's up to each one of us to try to figure out how to make it a reality.

The biggest challenges in DevOps today

What are the biggest challenges in DevOps at the moment?

Viktor: Currently, DevOps is mostly misunderstood. More often than not, companies just rename a department. In some companies, shared services become DevOps teams; in others it is infrastructure, operations, or any other department. It's as if it was a race, and the first department to change its name to "DevOps" was the winner. Logically, changing the name means nothing and does not result in any tangible improvement. The key challenges are related to people and culture. DevOps is not easy because it challenges the current organizational structure, it restructures power within an organization, and it questions the need for the existence of many departments. As such, middle management is often against it because it is perceived as a risk to their position. At the same time, people who spent many years doing the same thing over and over again feel that their credibility is at risk if the structure that allowed them to climb the company ladder is removed.

Congratulations on the release of DevOps Paradox. Could you talk a little bit about the idea behind it and what you hope it achieves?

Viktor: I go to a lot of conferences and I realized that scheduled talks are not the main takeaway from them. True, I learned things by listening to them, but the primary reason I continue attending is the "corridor talks." Conferences are a great opportunity for me to find interesting people and have amazing discussions. Unlike scheduled talks, those conversations are not structured. I do not prepare a list of questions for the next person I'll meet in between talks or at a party. Instead, we just start talking about a random thing that happens to be interesting. I wanted to bring those types of conversations to people who cannot travel the world and be at a different conference in a different country every month. So, I did not have any real goals for this book, other than speaking with people about any topic, as long as it is related to DevOps.
Since DevOps can be anything related to software development, you can say that the scope of the book is as broad as it can be. My true goal was to enjoy having conversations with people. I did not prepare questions in advance. Instead, I just gathered people I'd like to speak with if I met them at a conference and said, "Let's have a coffee and see what you've been up to since the last time we met." Some of those I interviewed are my friends, while others I met for the first time. Some work for huge enterprises, while others work in startups. Some have worked in the software industry for many years, while others are young up-and-coming experts. I wanted to make sure that the book gives as many different opinions as possible.

Find Viktor Farcic's DevOps Paradox on the Packt store. Read the first chapter for free on the Packt subscription platform.

Luis Weir explains how APIs can power business growth [Interview]

Packt Editorial Staff
06 Jan 2020
10 min read
API management is a discipline that has evolved to deliver the processes and tools required to discover, design, implement, use, and operate enterprise-grade APIs. The discipline bisects two distinct communities and deserves the attention of both: developers who build APIs, and business and IT leaders looking at APIs to drive growth. In Enterprise API Management, Luis Weir shows how to define the right architecture, implement the right patterns, and define the right organization model for business-driven APIs. The book explores architectural decisions, implementation patterns, and management practices for successful enterprise APIs. It also gives clear, actionable advice on choosing and executing the right API strategy in your enterprise. Let's see what Luis has to say about API management and the key principles that improve API design for enterprise organizations.

What API management involves

What does API management mean and involve?

Luis Weir: In simple terms, it's the discipline that aligns tools with processes and people in order to realize the value from implementing enterprise-grade APIs throughout their full life cycle. By enterprise-grade, I mean APIs that comply with a minimum set of quality standards, not just in the actual API itself (e.g. use of normalized semantics, well-documented interfaces, and good user experience), but also in the engineering processes behind their delivery (e.g. CI/CD pipelines, robust automation at all levels, different levels of testing, and so on).

Guiding principles for API design

What are some guiding principles that can improve API design?

LW: First and foremost is the identification of the APIs themselves. It's not just about building an API for the sake of it and assuming value will just come. Without adopting a process (e.g. ideation) that can help identify APIs that can truly add value, there is a real risk that an API might just end up being DOA (dead on arrival), as there might not even be a need for it.

Assuming such a process has taken place and APIs with real potential to add value have been identified, the next step is to conceptualize a design. It is at this point that disciplines such as domain-driven design can help produce a design in a way that both business and IT people can relate to. This design should capture things such as consuming applications, producing applications (data sources), data entities, and the services involved in the concept. It should clearly and simply define the relationships between the components and define boundaries (bounded contexts), as these will be key not just in the actual implementation of the API or APIs (it may end up being more than just one), but also in the creation of the API specifications themselves through IDLs (e.g. an OAS file, API Blueprint, GraphQL schema, or .proto file in gRPC, to name a few).

The next and very important step for producing a good API is to follow an API design-first process. This process ensures that the API specifications and API mocks (produced from the API specifications themselves) undergo a series of validations by potential consumers of the API as well as other relevant parties. The idea is to obtain as much feedback as possible through multiple iterations (or feedback loops) to ensure that the API is fit for purpose and that it also delivers a good user experience. For more details, please refer to the API life cycle section in my book.

Testing APIs

What are the different API testing approaches?
LW: At the very minimum, API testing should involve the following testing approaches:

• Interface testing
• Functional testing
• Performance testing
• Security testing

Interface testing is used to validate that an API implementation conforms to the API specification (a minimal sketch of such a check appears at the end of this interview). Functional testing is used to validate that the API delivers the functionality that it is meant to deliver, with the expected behavior. Performance testing ensures that APIs can actually handle the expected volume and scale as required. Security testing ensures that the API is not vulnerable to common threats, such as those described in the OWASP Top 10 project.

Other, more sophisticated testing approaches may include A/B testing and chaos testing. A/B testing dynamically tests new API features against a subset of the API audience in a running environment (even production). Chaos testing (e.g. randomly shutting down components of the solution in production to ensure the API is resilient) should be considered as the API initiative matures.

Understanding API gateways

What are the key features of an API gateway?

LW: There are many capabilities expected of an API gateway, and these are all well described in the API exposure section in my book. However, in addition to such capabilities, which in my view are all essential, there are some key features that set modern API gateways (3rd generation) apart from more traditional ones (1st and 2nd gen). These are:

• Lightweight: requires minimum disk space, CPU, and RAM to run.
• Hybrid: can run on-premise, on cloud, and on multiple cloud platforms (e.g. AWS, Azure, Google, Oracle, etc.).
• Kubernetes ready: K8s has become the most popular runtime platform for microservices. Modern APIs should be easily deployed into the K8s runtime and support many of the patterns described in my book.
• Common control plane: if the management of APIs deployed on gateways isn't centralized in some way, shape, or form, then allowing enterprise users to discover and (re)use already built (or being built) APIs will be extremely difficult and will lead to a lot of duplication. We've already seen this in the SOA days. Modern API gateways should, therefore, be pluggable into control planes that take care of things like API lifecycle management and gateway infrastructure management.
• Phone-home: this is a key feature, and one that still not many modern gateways support. The ability for an API gateway to establish communication with the management tier via the control plane (phone-home) using standard ports is key in hybrid architectures to avoid networking and other security constraints.

Enterprise API Management, I think, provides a pretty comprehensive overview of what modern API platforms look like and how to differentiate them from more traditional ones.

Common mistakes in API management

What are the common mistakes people make in API management?

LW: Throughout my time as an API strategist and practitioner I've seen many mistakes and also made some myself. The important thing is being able to recognize what they are and learn from them. The top three that come to my mind:

1. Thinking that API management is just about implementing a product or tools, without having business and customer value at the epicentre of the API strategy. (Sometimes there isn't even an API strategy.) This is perhaps the most common one, and one that happened a lot in the old SOA days and unfortunately still occurs in the modern API-led era.
My book, Enterprise API Management, can be used as a guideline on how to avoid this by making an API management initiative less about tools and more about business/customer value, people, and processes.

2. Thinking that all APIs are the same and therefore treating them all the same way. In some cases this just happens accidentally; in other cases it happens to avoid 'layering' APIs because 'microservices architectures and practitioners say so'. The fact of the matter is that an API built specifically in support of a given mobile application will be less generic and less suited for use outside of the context in which it was built, compared to an API that was built without any specific consuming application in mind (and thus is not coupled to any application lifecycle).

3. Adopting the wrong organizational model to provide API capabilities across the enterprise. For example, this could be a model that centralizes all API efforts and capabilities, thus becoming a bottleneck and eventually becoming slow (aka traditional IT). Modern API initiatives should think about adopting platform models with self-service at the epicentre.

In addition to the above three, there are many common pitfalls when it comes to API architecture and design. However, to cover these I strongly recommend my talk on the 7 deadly sins of API design: https://www.youtube.com/watch?v=Sx2_etbb9JA

API management and DevOps

What are your thoughts about 3rd generation API management having a huge impact on DevOps?

LW: Succeeding in modern API management and microservices architectures requires changes beyond technology, and also requires diving deep into the organization and its culture. It means moving away from traditional project-based deliveries, wherein teams assemble just for the duration of a project and hand over the delivered software (e.g. an API and related services) to different support teams, and moving instead towards a product-based organization, wherein teams are assembled around business capabilities and retain accountability and ownership through the entire life cycle of the product. This fundamental change of approach in delivering software means that there is no longer a split between development and operations teams, as a product team has full ownership and accountability over its product.

With that said, in order to avoid (re)building these product teams and maintaining core IT capabilities from scratch (e.g. API platforms and service runtimes), a platform operating model can be adopted. This model can offer common IT capabilities, although in a decentralized, on-demand, and self-service way. And for me, accomplishing the above is true DevOps. It is at this point that organizations can become more agile and can truly improve their time to market.

What were your goals and objectives in this book, and how well do you feel you achieved them?

LW: When I started defining and implementing API and microservices strategies in large enterprises (many of them Fortune 500), although there was plenty of content around to get inspiration from (much of it referenced in my book), I had to literally go through several articles, books, videos, and other resources in order to conceive a top-down, business-led approach towards delivering end-to-end API and microservices strategies. When I say end to end, it doesn't mean just defining PowerPoints and lengthy Word documents explaining how to deliver API/microservices strategies and then just walking away.
Or worse, sitting on the side with an opinion but no accountability (unfortunately only too common in the consulting world: lots of senior consultants with strong opinions but little or no real practical knowledge and experience). Rather, it means walking the talk, defining the strategy, and also delivering it with all of its implications.

With this book, I therefore wanted to share with the community an approach that I created, evolved through the years, and have seen working. It's not just theory, but a mix of theory and practice. It's not just ideas, but ideas that I have put into practice. This book is about sharing my real-life experiences and approach to delivering API and microservices strategies.

Therefore, I think (or hope) that I have accomplished my goals with this book. I felt that there is great stuff out there focused on specific parts of the "end to end" but not the actual "end to end," which is what I wanted to cover in this book. I didn't want to be too high-level or too detailed. I wanted to give something to multiple audiences, as it requires multiple audiences (technical and non-technical) working together in order to successfully deliver API management. Ultimately, the readers will be the judge, but I think I have accomplished my goals with this book.

Find Enterprise API Management on the Packt store. Read the first chapter for free on Packt's subscription platform.
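As referenced in the testing discussion above, interface (or contract) testing checks that what an API actually returns matches the shape promised by its specification. The sketch below is a minimal, hypothetical illustration of that idea in Python; in practice the expected schema would be derived from an OAS/OpenAPI document or another IDL, and the check would run against a live endpoint rather than a hard-coded sample. The `/orders/{id}` contract and its field names are invented for the example.

```python
# Fields and types a hypothetical GET /orders/{id} contract promises to return.
ORDER_SCHEMA = {
    "id": str,
    "status": str,
    "total": float,
    "items": list,
}

def contract_violations(payload: dict, schema: dict) -> list:
    """Return a list of violations; an empty list means the payload conforms."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return errors

if __name__ == "__main__":
    sample_response = {"id": "ord-123", "status": "SHIPPED", "total": 99.5, "items": []}
    violations = contract_violations(sample_response, ORDER_SCHEMA)
    print("conforms to contract" if not violations else violations)
```

Functional, performance, and security testing then build on top of this baseline: the same endpoint is exercised for correct behaviour, under load, and against the kinds of threats listed in the OWASP Top 10.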

Why ASP.Net Core is the best choice to build enterprise web applications [Interview]

Vincy Davis
30 Dec 2019
9 min read
ASP.NET Core, the cross-platform and open-source framework developed by Microsoft for building modern, cloud-based, and internet-connected applications, is designed to enable runtime components, APIs, compilers, and languages to evolve quickly, and it runs on macOS, Linux, and Windows on the .NET Core or .NET Framework. To learn more about the development cycle of ASP.NET Core and to gain insight into its future design directions, we interviewed Kenneth Y. Fukizi, the author of the book 'Learn ASP.NET Core 3.0, Second Edition', published by Packt. He has more than 14 years of professional experience and works as a software engineering contractor/consultant for client organizations based in South Africa, Australia, the U.S.A., and Canada.

Kenneth believes that the current performance of ASP.NET Core is far superior to that of its predecessors and its competitor frameworks. He prefers to use ASP.NET Core to build enterprise web applications because of the flexibility that comes with it. He is also excited that .NET 5 will have more interoperability with other programming languages. When asked about his thoughts on Microsoft supporting the open source platform Pulumi, Kenneth said it will definitely help developers in building modern cloud applications.

If you are an ASP.NET Core user, you should definitely read part 1 of our interview, 'Kenneth Fukizi on the new Blazor framework, gRPC support, and other exciting features in ASP.NET Core 3.0'. In that interview, he shares his impressions of all the exciting new features in the ASP.NET Core 3.0 release and explains why all ASP.NET Core users should be looking forward to the high performance and scalability that comes with gRPC in this new release. Here is the full interview with Kenneth on ASP.NET Core.

On why ASP.NET Core is the best option for web application development

What makes .NET Core one of the best general-purpose development platforms? How does ASP.NET Core enhance the performance of web applications? What do you think are the key benefits of ASP.NET Core for enterprise web application development?

With .NET Core as a platform, you can develop web applications, desktop applications, cloud-native applications, mobile applications, gaming applications, Internet of Things (IoT) applications, and artificial intelligence (AI) applications, and you probably can't ask for more from a development platform.

ASP.NET Core has gone a long way in making sure that web application performance is enhanced compared to its predecessors, or indeed some of its competitor frameworks. For example, by making full use of asynchronous programming models, ASP.NET Core has pretty much eliminated the need to have CPU cycles sitting and waiting on database queries, web service calls, and I/O operations, thereby wasting precious resources. ASP.NET Core was designed from the ground up, unifying the MVC and Web API frameworks. It has removed the dependency on IIS and shed other excess baggage, including a preload of third-party libraries, and as a result it is much more lightweight and fast, gaining performance along the way. We can say a lot of things about performance, including its improved capability with output caching and other features, but the truth of the matter is that it is getting more performant by the day. There is actually a way to track its performance metrics, through the TechEmpower benchmarks publicly available on the web.
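Kenneth's point about asynchronous programming is not specific to ASP.NET Core (which uses C#'s async/await); the sketch below uses Python's asyncio purely as a language-neutral illustration of why non-blocking I/O matters. Three simulated I/O-bound calls (a database query, a web service call, a file read) run concurrently instead of each holding a thread hostage while it waits, which is the effect Kenneth describes. The call names and timings are invented for the example.

```python
import asyncio
import time

async def fake_io_call(name: str, seconds: float) -> str:
    """Stands in for a database query, web service call, or file read."""
    await asyncio.sleep(seconds)  # while awaiting, the event loop is free to do other work
    return f"{name} done"

async def handle_request() -> list:
    # Issue the three I/O-bound calls concurrently instead of waiting on each in turn.
    return await asyncio.gather(
        fake_io_call("database query", 1.0),
        fake_io_call("web service call", 1.0),
        fake_io_call("file read", 1.0),
    )

if __name__ == "__main__":
    start = time.perf_counter()
    results = asyncio.run(handle_request())
    print(results, f"elapsed ~{time.perf_counter() - start:.1f}s")  # ~1s rather than ~3s
```

The same shape in a real web framework means a small pool of threads can serve many simultaneous requests, because none of them sit idle waiting on slow calls.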
ASP.NET Core is my choice for building enterprise web applications, mainly because of the flexibility that comes from it being cross-platform. It starts with the tooling available: you can develop ASP.NET Core applications using Visual Studio or Visual Studio Code on Windows or Mac operating systems, and even on Linux. Within an enterprise, you will have people with different roles working on an enterprise application, and the wide tooling available just makes it convenient to cater to a diverse group of project members.

ASP.NET Core has a vibrant community that is always able to give its input, and the fact that it is open source actually paves the way for faster improvements and applicability across industries. Apart from the development environment, when ASP.NET Core applications are ready to be deployed into production, you can do so internally in your organization or on just about any worthwhile cloud hosting service provider, including Azure and AWS. (Read chapter 3 of my book for more details on creating a continuous integration pipeline with Azure DevOps.)

It's easy for ASP.NET Core to interact with applications developed on other, external tech stacks, and typically an enterprise application will need to talk to several other applications. I'm personally excited by the fact that a future version of the .NET Core runtime that ASP.NET Core runs on, to be called .NET 5, is slated to have more interoperability with other languages like Java, Objective-C, and Swift. There are many more advantages of using ASP.NET Core that come to mind, and we could take the whole day discussing them, but to cut a long story short, ASP.NET Core will not disappoint you, and it's moving fast in terms of improvements where it is lacking.

Recently, Microsoft announced that .NET Core will support the open source platform Pulumi for building modern cloud applications. This aims to help developers declare cloud infrastructure, including all of Azure, such as Kubernetes and Cosmos DB, using any .NET language, like C#, VB.NET, and F#. To what extent, and how, do you think Pulumi with .NET will help developers?

There are those who are not so familiar or not so comfortable navigating cloud infrastructure, or who just can't be bothered to learn something new, and instead of getting out of their comfort zone, which is within the code base, they can declare everything through code: for example, resource groups and everything else that makes up the cloud infrastructure. Pulumi just makes everything a bit easier; it abstracts everything away and replaces the need to use different tools to create our cloud infrastructure, for example coming up with JSON or YAML files or a cloud domain-specific language (DSL). Instead of all that, we can just declare it using a language that we already know as developers. It will definitely be handy.

On ASP.NET Core's longevity and future design directions

At the recent NDC conference, Ryan Nowak, a Microsoft developer and architect on ASP.NET Core, shared the details of several future projects like Bedrock, Houdini, and SMALL FAST.NET Server. The common goal of these projects is to simplify cross-platform compatibility among different environments. How do you think these projects will help in shaping the future design directions of .NET 5 and ensure the longevity of ASP.NET Core?
On ASP.NET Core’s longevity and future design directions

At the NDC conference held recently, Ryan Nowak, a Microsoft developer and architect on ASP.NET Core, shared the details of many future projects like Bedrock, Houdini, and SMALL FAST.NET Server. The common goal of these projects is to simplify cross-platform compatibility among different environments. How do you think these projects will help in shaping the future design directions of .NET 5 and ensure the longevity of ASP.NET Core?

.NET 5 is already in the process of being put together, with full knowledge of what is happening around project Bedrock, project Houdini, and SMALL FAST.NET Server. What I can personally see from project Bedrock is that, starting at the lowest layer, .NET Sockets will become more prominent in dealing with network I/O, at the expense of Libuv, which was borrowed from Node.js for its cross-platform capabilities. Obviously .NET Sockets will learn a thing or two from how Libuv has been operating and apply those lessons so that it works seamlessly with .NET technologies, and .NET 5 stands to benefit a lot from the improvements.

I personally see .NET 5 being influenced to cater for more protocols like MQTT, AMQP, HTTP/3, and QUIC, and I wouldn’t be surprised to see a bit more interoperability with other programming languages in .NET 5. ASP.NET Core is here to stay, as it is designed to work exclusively on the .NET Core runtime, which is transitioning into .NET 5 soon.

I can see a lot of improvements in ASP.NET Core 3.0, especially in taking a bit more responsibility off the MVC framework and onto ASP.NET Core as a platform. This allows functionality to be reused across different frameworks like SignalR, gRPC services, Blazor, Controllers, and Pages. It is already happening, as is evident in the use of endpoint routing, which caters for all of what I call the big five frameworks in project Houdini, mentioned above. Taking responsibility away from MVC and moving it to a lower layer actually makes MVC more lightweight and developer-friendly, and it does not kill MVC; as you can see, it is still very much alive. All this restructuring makes the platform more flexible in dealing with the kind of change that is characteristic of different platforms, and makes it more ready to become truly cross-platform.

About the Book

Get your hands on the book ‘Learn ASP.NET Core 3.0, Second edition’ by Packt Publishing to become highly efficient in developing and maintaining powerful web applications. It will also guide you on how to deploy and monitor your applications using Microsoft Azure, AWS, and Docker. The book takes you through a realistic, practical ASP.NET Core MVC application to give you a feel for how it would work in a real-life scenario.

About the Author

Kenneth Y. Fukizi is a solutions architect, consultant, software developer, and engineer with more than 14 years of professional experience. He is a Microsoft Certified Trainer®, Microsoft Certified Solutions Developer®, Microsoft Certified Solutions Associate®, and Microsoft Certified Professional®, among other professional and technical certifications. Kenneth also lectures and mentors computer science degree students in programming. He has spent most of his professional life working as a software engineering contractor/consultant on various projects for client organizations based in South Africa, Australia, the U.S.A., and Canada.

.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
Inspecting APIs in ASP.NET Core [Tutorial]
How to call an Azure function from an ASP.NET Core MVC application

Kenneth Fukizi on the new Blazor framework, gRPC support, and other exciting features in ASP.NET Core 3.0

Vincy Davis
30 Dec 2019
9 min read
The open-source framework ASP.NET Core is one of the most popular web frameworks, developed by Microsoft and its community. The modular framework runs on both the full .NET Framework, on Windows, and the cross-platform .NET Core. One of its most captivating features is performance: it is not only faster than other web frameworks but is also perfectly suited to Docker containers. In November, Microsoft released the new version of ASP.NET Core with many exciting features.

To understand the advances in ASP.NET Core more closely, we interviewed Kenneth Y. Fukizi, the author of the book ‘Learn ASP.NET Core 3.0, Second edition’. Along with ASP.NET Core, Kenneth also shared his thoughts on the latest version of .NET Core and the now available Entity Framework Core 3.0 and Entity Framework 6.3. He says that he is personally most excited about the new Blazor framework, as it allows him to avoid JavaScript, and he believes that Blazor will give developers a chance to specialize in Microsoft technologies. Kenneth also adds that ASP.NET Core users should be looking forward to the high performance and scalability that comes with gRPC in this new release.

Kenneth’s take on the features in ASP.NET Core 3.0

In your book, ‘Learning ASP.NET Core 3.0, Second edition’, you say that ‘Model View Controller’ and ‘Entity Framework Core 3’ are the most widely used frameworks in ASP.NET Core. Can you elaborate on this? How do these frameworks enhance web application performance?

Pretty much every worthwhile application out there will need some interface for user interaction and will need to persist some form of data. The Model View Controller pattern is almost a default choice for many web application developers, mainly for its tried and tested versatility and its ability to separate concerns, allowing front-end developers to work on the views while back-end developers are busy with their part. This allows for fast application development.

When persisting data, Entity Framework Core 3.0 is often the choice for most application developers on ASP.NET Core, because it is a safer option among a host of other ORM engines, being developed and maintained by Microsoft itself. Even though some time back it was found wanting in some areas compared to versatile ORMs like NHibernate, Entity Framework Core has addressed its gaps quickly and effectively and continues to grow in the areas where it was behind. Its seamless integration with the rest of the framework makes it an easy choice for many application developers.

ASP.NET Core MVC supports asynchronous calls, thereby releasing unnecessarily held resources and in turn increasing application performance. Since everything is logically separated into groups, in other words high cohesion and low coupling, it increases the application performance on the whole. If we talk about the advantages this gives the application developer, there are many, and most of them lead to code that does not repeat itself unnecessarily and has low coupling, allowing everything to be tested individually. In general, it makes it easier to write efficient code, which in turn makes the application more performant. (Read chapters 4 and 5 of my book for all the basic concepts of ASP.NET Core 3.0.)
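To give a flavour of the Entity Framework Core 3.0 usage Kenneth refers to, here is a small, hedged sketch. The Blog entity, BlogContext, and SQLite connection string are invented for illustration, and the Microsoft.EntityFrameworkCore.Sqlite package is assumed; this is not an example from the book.

```csharp
// A small Entity Framework Core 3.0 sketch with a hypothetical Blog entity.
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int Views { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlite("Data Source=blogs.db");
}

public static class BlogQueries
{
    public static async Task<Blog[]> MostViewedAsync()
    {
        using var context = new BlogContext();
        // EF Core 3.0 translates this whole LINQ query into a single SQL statement
        // (and throws if it cannot), rather than silently filtering rows on the client.
        return await context.Blogs
            .Where(b => b.Views > 1000)
            .OrderByDescending(b => b.Views)
            .Take(10)
            .ToArrayAsync();
    }
}
```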
ASP.NET Core 3.0 has introduced a new framework called Blazor for building interactive client-side web UI with .NET. What are your thoughts on it?

This is definitely a game-changer. I’m personally excited that I have the option to use Blazor instead of JavaScript. It gives a developer the chance to specialize in Microsoft technologies and be truly full-stack, all within the MS tech stack. For a typical back-end C# developer, once they are familiar with C# syntax, type safety, and the environment in general, it becomes easier to work with one language across the stack, whether on the back end or the front end. Client-side Blazor, with its ahead-of-time (AOT) compilation directly to WebAssembly, will make client applications super fast, and it is something to definitely look out for. (Chapter 6 of my book gives a detailed explanation of Blazor.)

The latest release of ASP.NET Core also supports gRPC and ships with templates and tools for building gRPC services. What are the advantages and disadvantages that gRPC will offer to ASP.NET Core 3 users? Also, what are your thoughts on the gRPC built-in security features?

ASP.NET Core users should be looking forward to the high performance and scalability that comes with gRPC. With gRPC, a contract-first approach to API development is possible, documentation for projects becomes easier, and different teams on the same project are able to communicate better, with consistent API models as well. Reusability will be enhanced and encouraged by the gRPC templates. For applications using microservices, it will be advantageous in terms of security and performance to use gRPC, mainly for communication. gRPC uses protocol buffers, as opposed to REST services, which need to expose data to consumers as JSON or XML over HTTP. The HTTP/2 protocol that gRPC uses is also more efficient than its counterpart, HTTP/1.1. With the gRPC templates we can have messages that flow bi-directionally, increasing efficiency, and we can cancel requests that have already been sent if needed.

There are some disadvantages to using the gRPC templates, including the fact that it becomes a duty to maintain the specification files, which are an integral part of the template. If you have different teams, you will have to agree on specifications before you even start to implement anything, and that can slow down application development.

It is great to see that gRPC has visible built-in considerations for TLS/SSL and that it makes sure all its communications are authenticated and encrypted first. This goes a long way in not only preventing attacks on application services but also deterring them.

Have you had a chance to explore .NET Core 3.0, and the now available Entity Framework Core 3.0 and Entity Framework 6.3? What are you most excited about in this new release?

Yes, I have managed to explore .NET Core 3.0, and admittedly with great pleasure. Some of the most exciting features I have seen are the introduction of support for Cosmos DB and C# 8, and that in itself makes a whole lot of development easier for global applications. LINQ has always been quite handy to me, and I’m sure to many developers, but there have been scenarios where some queries have not been as efficient as I’d want them to be. To hear and see that it has been re-architected in many ways to produce more efficient queries is absolutely exciting, and I would love to uncover more of the query capabilities that are set to be improved in the future. Nullable reference types, introduced with C# 8 and supported by Entity Framework Core 3.0, will make my life simpler as a developer.
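As a brief, hedged illustration of the nullable reference types feature mentioned above, the sketch below uses a made-up Customer type to show what the C# 8 compiler flags.

```csharp
// A small illustration of C# 8 nullable reference types; the Customer type and
// its members are invented for this example.
#nullable enable

public class Customer
{
    // Non-nullable reference: the compiler warns if it could be left null.
    public string Name { get; set; } = string.Empty;

    // Nullable reference: callers are nudged to check before dereferencing.
    public string? MiddleName { get; set; }
}

public static class CustomerFormatter
{
    public static string Describe(Customer customer)
    {
        // Accessing customer.MiddleName.Length directly would produce a
        // "dereference of a possibly null reference" warning (CS8602);
        // the explicit null check keeps the code warning-free.
        var middle = customer.MiddleName is null ? "" : " " + customer.MiddleName;
        return "Customer: " + customer.Name + middle;
    }
}
```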
I’m also excited that Entity Framework 6.3 is the first version of Entity Framework 6 that can run on .NET Core, which will make it simpler to migrate older applications that use Entity Framework 6 onto the .NET Core platform. (Chapter 9 of the book gives details about accessing data using Entity Framework Core 3.)

We also asked Kenneth about his preferred way of improving the speed and rendering performance of any web application.

Bundling combines multiple files into a single file, reducing the number of requests made to the server, while minification shrinks JavaScript and CSS files by removing white space and commented code without altering any functionality, and can also apply a variety of other code optimizations that result in smaller payloads. Used together, they improve the load-time performance of an application by reducing both the number of requests to the server and the size of the requested assets.

My personal way of doing things, when I need some functionality from a package, is first to look within the development framework itself. Only when an implementation of that functionality cannot be found, or when it is clearly deficient for the task at hand, do I go outside of the primary provider, Microsoft. It is therefore only natural that I prefer the out-of-the-box solution provided by both MVC and Razor Pages, which makes use of bundleconfig.json. I have at times gone for the sophistication of Grunt or Webpack, but because they are relatively more complex, the built-in functionality is naturally my first option, as long as it doesn’t fail horribly.

About the Book

‘Learn ASP.NET Core 3.0, Second edition’ will help you become highly efficient in developing and maintaining powerful web applications. It will also guide you in deploying and monitoring your applications using Microsoft Azure, AWS, and Docker.

About the Author

Kenneth Y. Fukizi is a solutions architect, consultant, software developer, and engineer with more than 14 years of professional experience. He is a Microsoft Certified Trainer®, Microsoft Certified Solutions Developer®, Microsoft Certified Solutions Associate®, and Microsoft Certified Professional®, among other professional and technical certifications. Kenneth also lectures and mentors computer science degree students in programming. He has spent most of his professional life working as a software engineering contractor/consultant on various projects for client organizations based in South Africa, Australia, the U.S.A., and Canada.

.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
Inspecting APIs in ASP.NET Core [Tutorial]
An introduction to TypeScript types for ASP.NET core [Tutorial]

Understand Quickbooks online/desktop, online security, use cases, and more with Crystalynn Shelton, a certified QuickBooks ProAdvisor

Vincy Davis
27 Dec 2019
8 min read
QuickBooks, the accounting software package developed and marketed by Intuit, is targeted at small and medium-sized businesses. It offers on-premises accounting applications as well as cloud-based versions with remote access capabilities such as remote payroll assistance and outsourcing, electronic payment functions, online banking and reconciliation, mapping features, and more.

To know more about QuickBooks’ latest features and its learning curve for beginners, we did a quick interview with Crystalynn Shelton, a certified QuickBooks ProAdvisor and author of the book ‘Mastering QuickBooks 2020’. With more than 10 years of experience with QuickBooks, Shelton says QuickBooks is not only user-friendly but also cost-effective. Further, when asked about her views on QuickBooks Online, Shelton points out that its unlimited live technical support is one of its main features.

On QuickBooks, its benefits and use cases

What are some of the advantages of QuickBooks that set it apart from its competitors?

QuickBooks has a number of advantages that set it apart from its competitors. First, it is affordable for most small businesses: whether you purchase an Online subscription (starting at $20/month) or a desktop product (starting at a one-time fee of $199), there is something for every budget. Another benefit of using QuickBooks is that the program is very user-friendly. Most small business owners purchase the software and are able to set it up without having an IT person on staff. In addition, there are a number of training videos and an extensive help menu within the program, not to mention live tech support if you need it. Because QuickBooks is the accounting software most widely used by small businesses, most accountants and CPAs are familiar with the program. Some of these folks are certified ProAdvisors (like myself); they can offer consulting, training, and even bookkeeping services to small business owners who use QuickBooks.

Can you elaborate on how small businesses can benefit from QuickBooks? Also, how does QuickBooks simplify tasks for them?

While there are numerous reasons why small businesses decide to use QuickBooks, a handful tend to be the most common: small businesses that can’t afford to hire a bookkeeper; small businesses that have outgrown Excel spreadsheets and need a more sophisticated way to track income and expenses; small businesses that need financial statements in order to apply for a line of credit or business loan; and small businesses whose tax professional will no longer accept a shoebox of receipts to file taxes.

QuickBooks simplifies bookkeeping by allowing you to track all aspects of the business in one place: accounts payable, accounts receivable, income, and expenses. It uses simple language such as “people who owe you” (accounts receivable) or “what you owe to others” (accounts payable) to help business owners without prior bookkeeping knowledge comprehend the program. QuickBooks also allows you to accept credit card payments from customers, so you can get paid faster and easily reconcile payments against open invoices. Not to mention, you can reduce (if not eliminate) manual data entry by connecting all of your business bank and credit card accounts to QuickBooks Online (QBO).

Can you elaborate on how your book ‘Mastering QuickBooks 2020’ will prepare bookkeepers and accounting students in learning the ropes of QuickBooks?
Also, how does the learning curve look for users who have no bookkeeping knowledge and no experience with QuickBooks?

This book was written with the assumption that the reader has no experience or knowledge of bookkeeping. We use simple language to explain how QuickBooks works, and we have also provided screenshots to support the concepts being taught. Chapter 1 includes a section that covers bookkeeping basics, which will help non-accountants gain a better understanding of the terminology used in the field of accounting as well as in QuickBooks. This information will help aspiring accountants build on their existing bookkeeping knowledge. In addition, we have included the behind-the-scenes debits and credits for certain transactions to help accounting students prepare for the CPA exam or other academic tests.

Shelton’s views on QuickBooks Online and Desktop

What are your thoughts on QuickBooks Online and QuickBooks Desktop? What are the benefits of cloud accounting over desktop? Do factors such as the size of an organization or its maturity matter in choosing between the online and the desktop version?

There are several benefits of using cloud accounting software over desktop software. Cloud accounting software allows you to manage your business from any device with an internet connection, whereas desktop limits you to a desktop computer. With cloud accounting software like QuickBooks Online, you can give anyone access to your QuickBooks data without them having to travel to your office. Cloud accounting software includes automatic real-time updates of your data, and unlike desktop software, you don’t have to worry about backing up your data; it’s automatically done for you. Finally, QuickBooks Online includes unlimited live technical support. This is an invaluable feature for small business owners who manage their own books and need the ability to get help when they need it.

The size of an organization, its structure, and its length of time in business can definitely impact whether a business should choose QuickBooks Online or Desktop. As a QuickBooks ProAdvisor, one of the first things I do is conduct an assessment to determine my client’s needs. This involves documenting the details of their current processes (i.e. invoicing customers, paying bills, managing inventory, etc.). Once I have this information, I am able to determine whether QuickBooks Desktop is right or whether QuickBooks Online is the best fit. If both products are suitable, I provide my clients with the downsides (if any) of going with one product over the other. This gives my clients all of the information they need to make an informed decision.

On how QuickBooks secures online data

How does QuickBooks help in securing payments? How does QuickBooks keep online data safe?

To secure payments, Intuit transmits, supports, protects, and accesses all cardholder information in compliance with the Payment Card Industry (PCI) data security standards. Additional security precautions Intuit has implemented are as follows: all data between Intuit servers and their customers is encrypted with at least 128-bit TLS, and all copies of daily backup data are encrypted with 256-bit AES encryption; data is kept secure with multiple servers housed in Tier 3 data centers that have strict access controls and real-time video monitoring of the data center; and all servers are hardened Linux installations, which are monitored in real time and kept up to date with security patches.
Can you suggest some best practices (at least five) that will help QuickBooks aspirants in saving time and becoming a QuickBooks pro?

There are several ways you can save time and become proficient in QuickBooks Online. First, I recommend that you use QuickBooks on a daily basis; the more hands-on experience you have with QuickBooks, the more proficient you will become.

Second, take the time to properly set up your QuickBooks account before you start entering transactions. In Chapter 2, we provide you with a detailed checklist of the information you need to set up QuickBooks. By taking the time to set up customers, vendors, the chart of accounts, and your products and services upfront, you will spend less time having to do it later on when you are trying to enter data.

Third, all aspiring bookkeepers and accountants should get certified in QuickBooks Online. Certification is offered through Intuit, and it is free. As a Certified QuickBooks ProAdvisor, you get access to product discounts, marketing materials to promote bookkeeping services to prospective clients, and a certification badge and designation you can put on business cards, websites, and email signature lines.

Fourth, utilize keyboard shortcuts. They will save you time as you navigate the program. We have included a list of QBO keyboard shortcuts in the appendix of this book.

Finally, connect as many bank and credit card accounts as you can to QBO. By doing so, you will reduce the amount of manual data entry required, which will help you keep your books up to date.

If you want to learn how to build the perfect budget, simplify tax return preparation, manage inventory, track job costs, and generate income statements and financial reports, check out Crystalynn’s book ‘Mastering QuickBooks 2020’. This book will work for a small business owner, bookkeeper, or accounting student who wants to learn how to make the most of QuickBooks Online.

About the Author

Crystalynn Shelton is a licensed Certified Public Accountant and a certified QuickBooks ProAdvisor, and she has been certified in QuickBooks for more than 10 years. Crystalynn is currently a staff writer for Fit Small Business and an adjunct instructor at UCLA Extension, where she teaches accounting, bookkeeping, and QuickBooks to hundreds of small business owners and accounting students each year. Her previous experience includes working at Intuit (QuickBooks) as a Sr. Learning Specialist.

MongoDB’s CTO Eliot Horowitz on what’s new in MongoDB 4.2, Ops Manager, Atlas, and more
New QGIS 3D capabilities and future plans presented by Martin Dobias, a core QGIS developer
Greg Walters on PyTorch and real-world implementations and future potential of GANs
Elastic marks its entry in security analytics market with Elastic SIEM and Endgame acquisition
“The challenge in Deep Learning is to sustain the current pace of innovation”, explains Ivan Vasilev, machine learning engineer