
Author Posts


Why ASP.Net Core is the best choice to build enterprise web applications [Interview]

Vincy Davis
30 Dec 2019
9 min read
ASP.NET Core, the cross-platform and open-source framework, is developed by Microsoft for building modern, cloud-based, and internet-connected applications. Designed to enable runtime components, APIs, compilers, and languages to evolve quickly, it runs on macOS, Linux, and Windows on .NET Core or the .NET Framework.

To learn more about the development cycle of ASP.NET Core and its future design directions, we interviewed Kenneth Y. Fukizi, the author of the book 'Learn ASP.NET Core 3.0, Second Edition', published by Packt Publishing. He has more than 14 years of professional experience and works as a software engineering contractor/consultant for client organizations based in South Africa, Australia, the USA, and Canada.

Kenneth believes that the current performance of ASP.NET Core is far superior to that of its predecessors and its competitor frameworks. He prefers ASP.NET Core for building enterprise web applications because of the flexibility that comes with it, and he is excited that .NET 5 will have more interoperability with other programming languages. When asked about Microsoft supporting the open source platform Pulumi, Kenneth says it will definitely help developers build modern cloud applications.

If you are an ASP.NET Core user, you should also read part 1 of our interview, 'Kenneth Fukizi on the new Blazor framework, gRPC support, and other exciting features in ASP.NET Core 3.0'. In it, he shares his impressions of the new features in the ASP.NET Core 3.0 release and explains why all ASP.NET Core users should look forward to the high performance and scalability that gRPC brings in this release.

Here is the full interview with Kenneth on ASP.NET Core.

On why ASP.NET Core is the best option for web application development

What makes .NET Core one of the best general-purpose development platforms? How does ASP.NET Core enhance the performance of web applications? What do you think are the key benefits of ASP.NET Core for enterprise web application development?

With .NET Core as a platform, you can develop web applications, desktop applications, cloud-native applications, mobile applications, gaming applications, Internet of Things (IoT) applications, and Artificial Intelligence (AI) applications; you probably can't ask for more from a development platform.

ASP.NET Core has gone a long way in making sure that web application performance is enhanced compared to its predecessors and some of its competitor frameworks, for example by making full use of asynchronous programming models. ASP.NET Core has pretty much eliminated the need to have CPU cycles waiting on database queries, web service calls, and I/O operations, and thereby wasting precious resources.

ASP.NET Core was designed from the ground up, unifying the MVC and Web API frameworks. It has removed the dependency on IIS and shed a lot of excess baggage, including a preload of third-party libraries, and as a result it is much more lightweight and fast, gaining performance along the way. We could say a lot more about performance, including its improved output caching, but the truth of the matter is that it is getting more performant by the day. You can track its performance metrics through the TechEmpower benchmarks, which are publicly available on the web.
ASP.NET Core is my choice for building enterprise web applications, mainly because of the flexibility that comes from it being cross-platform. It starts with the tooling: you can develop ASP.NET Core applications using Visual Studio or Visual Studio Code on Windows, Mac, or even Linux. Within an enterprise, you will have people with different roles working on an application, and the wide tooling available makes it convenient to cater to a diverse group of project members.

ASP.NET Core has a vibrant community that is always able to give its input, and the fact that it is open source paves the way for faster improvements and applicability across industries. Beyond the development environment, when ASP.NET Core applications are ready to be deployed into production, you can do so internally in your organization or on just about any worthwhile cloud hosting provider, including Azure and AWS. (Read chapter 3 of my book for more details on creating a continuous integration pipeline with Azure DevOps.)

It's easy for ASP.NET Core to interact with applications built on other tech stacks, and an enterprise application will typically need to talk to several other applications. I'm personally excited that a future version of the .NET Core runtime that ASP.NET Core runs on, to be called .NET 5, is slated to have more interoperability with other languages like Java, Objective-C, and Swift. There are many more advantages to ASP.NET Core, and we could spend the whole day discussing them, but to cut a long story short, ASP.NET Core will not disappoint you, and it is improving fast where it is lacking.

Recently, Microsoft announced that .NET Core will support the open source platform Pulumi for building modern cloud applications. This aims to help developers declare cloud infrastructure, including all of Azure such as Kubernetes and CosmosDB, using any .NET language like C#, VB.NET, and F#. To what extent and how do you think Pulumi with .NET will help developers?

There are those who are not so familiar or comfortable navigating cloud infrastructure, or who just can't be bothered to learn something new. Instead of getting out of the comfort zone of their code base, they can declare everything through code, for example resource groups and everything else that makes up the cloud infrastructure. Pulumi makes everything a bit easier: it abstracts everything away and replaces the need to use different tools to create our cloud infrastructure, such as writing JSON or YAML files or learning a cloud Domain Specific Language (DSL). Instead of all that, we can declare it using the language that we already know as developers. It will definitely be handy.

On ASP.NET Core's longevity and future design directions

At the NDC conference held recently, Ryan Nowak, a Microsoft developer and architect on ASP.NET Core, shared the details of many future projects like Bedrock, Houdini and SMALL FAST.NET Server. The common goal of these projects is to simplify cross-platform compatibility among different environments. How do you think these projects will help in shaping the future design directions of .NET 5 and ensure the longevity of ASP.NET Core?
.NET 5 is already in the process of being put together, with full knowledge of what is happening around project Bedrock, project Houdini, and SMALL FAST.NET Server. What I can personally see from project Bedrock is that, starting at the lowest layer, .NET Sockets will become more prominent in handling network I/O at the expense of Libuv, which was borrowed from Node.js for its cross-platform capabilities. Obviously .NET Sockets will learn a thing or two from how Libuv has been operating and apply those lessons so that it works seamlessly with .NET technologies, and .NET 5 stands to benefit a lot from the improvements.

I personally see .NET 5 being influenced to cater for more protocols like MQTT, AMQP, HTTP/3, and QUIC, and I wouldn't be surprised to see a bit more interoperability with other programming languages in .NET 5. ASP.NET Core is here to stay, as it is designed to work exclusively on the .NET Core runtime, which is transitioning into .NET 5 soon.

I can see a lot of improvements in ASP.NET Core 3.0, especially in taking responsibility off the MVC framework and onto ASP.NET Core as a platform. This allows functionality to be reused across different frameworks like SignalR, gRPC services, Blazor, Controllers, and Pages. It is already happening, as is evident in the use of endpoint routing, which caters for what I call the big five frameworks in project Houdini, mentioned above. Taking responsibility away from MVC and moving it to a lower layer actually makes MVC more lightweight and developer-friendly; it does not kill MVC, which is still very much alive. All this restructuring makes the stack more flexible in dealing with the change that characterizes different platforms, and more ready to become truly cross-platform.

About the Book

Get your hands on 'Learn ASP.NET Core 3.0, Second Edition' by Packt Publishing to become highly efficient in developing and maintaining powerful web applications. It will also guide you through deploying and monitoring your applications using Microsoft Azure, AWS, and Docker. The book takes you through a realistic, practical ASP.NET Core MVC application, giving you a feel for how it would work in a real-life scenario.

About the Author

Kenneth Y. Fukizi is a solutions architect, consultant, software developer and engineer with more than 14 years of professional experience. He is a Microsoft Certified Trainer®, Microsoft Certified Solutions Developer®, Microsoft Certified Solutions Associate®, and Microsoft Certified Professional®, among other professional and technical certifications. Kenneth also lectures and mentors computer science degree students in programming. He has spent most of his professional life as a software engineering contractor/consultant on projects for client organizations based in South Africa, Australia, the USA, and Canada.

.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
Inspecting APIs in ASP.NET Core [Tutorial]
How to call an Azure function from an ASP.NET Core MVC application


Why MongoDB is the most popular NoSQL database today

Amey Varangaonkar
23 Jan 2018
12 min read
If NoSQL is the king, MongoDB is surely its crown jewel. With over 15 million downloads and counting, MongoDB is the most popular NoSQL database today, empowering users to query, manipulate and find interesting insights from their data.

Alex Giamas is a Senior Software Engineer at the Department for International Trade, UK. Having worked as a consultant for various startups, he is an experienced professional in systems engineering, as well as NoSQL and Big Data technologies. Alex holds an M.Sc. in Information Networking from Carnegie Mellon University and has attended professional courses at Stanford University. He is a MongoDB-certified developer and a Cloudera-certified developer for Apache Hadoop & Data Science Essentials. Alex has worked with a wide array of NoSQL and Big Data technologies, and built scalable and highly available distributed software systems in C++, Java, Ruby and Python.

In this insightful interview with MongoDB expert Alex Giamas, we talk about all things related to MongoDB - from why NoSQL databases gained popularity to how MongoDB is making developers' and data scientists' work easier and faster. Alex also talks about his book, Mastering MongoDB 3.x, and how it can equip you with the tools to become a MongoDB expert!

Key Takeaways
• NoSQL databases have grown in popularity over the last decade because they allow users to query their data without having to learn and master SQL.
• The rise in popularity of the JavaScript-based MEAN stack means many programmers now prefer MongoDB as their database of choice.
• MongoDB has grown from being just a JSON data store to become the most popular NoSQL database solution, with efficient data manipulation and administration capabilities.
• The sharding and aggregation framework, coupled with document validations, fine-grained locking, a mature ecosystem of tools and a vibrant community of users, are some of the key reasons why MongoDB is the go-to database for many.
• Database schema design, data modeling, backup and security are some of the common challenges faced by database administrators today. Mastering MongoDB 3.x focuses on these common pain points and shows how to build robust, scalable database solutions with ease.

NoSQL databases seem to have taken the world by storm, and many people now choose various NoSQL database solutions over relational databases. What do you think is the reason for this rise in popularity?

That's an excellent question. There are several factors contributing to the rise in popularity of NoSQL databases. Relational databases have served us for 30 years; at some point we realised that the one-size-fits-all model is no longer applicable. While "software is eating the world", as Marc Andreessen has famously written, the diversity and breadth of use cases we use software for has brought an unprecedented specialisation in the solutions to our problems. Graph databases, column-based databases and of course document-oriented databases like MongoDB are in essence specialised solutions to particular database problems. If our problem fits the document-oriented use case, it makes more sense to use the right tool for the problem (e.g. MongoDB) than a generic one-size-fits-all RDBMS. Another contributing factor to the rise of NoSQL databases, and especially MongoDB, is the rise of the MEAN stack, which means JavaScript developers can now work from the frontend through to the backend and the database.
Last but not least, more than a generation of developers have struggled with SQL and its several variations. The promise that one does not need to learn and master SQL to extract data from the database, but can rather do it using JavaScript or other more developer-friendly tools, is just too exciting to pass up. MongoDB struck gold in this respect, as JavaScript is one of the most commonly used programming languages. Using JavaScript for querying also opened up database querying to front-end developers, which I believe has driven adoption as well.

MongoDB is one of the most popular NoSQL databases out there today, and finds application in web development as well as Big Data processing. How does MongoDB aid in effective analytics?

In the past few years we have seen explosive growth in generated data. 80% of the world's data has been generated in the past 3 years, and this will continue even more in the near future with the rise of IoT. This data needs to be stored and, most importantly, analysed to derive insights and actions. The answer to this problem has been to separate transactional loads from analytical loads into OLTP and OLAP databases respectively. The Hadoop ecosystem has several frameworks that can store and analyse data. The problem with Hadoop data warehouses/data lakes, however, is threefold: you need experts to analyse the data, those experts are expensive, and it's difficult to get answers to your questions quickly. MongoDB bridges this gap by offering efficient analytics capabilities. MongoDB can help developers and technical people get quick insights from data that can help define the direction of research for the data scientists working on the data lake. By utilising tools like the new charts or the BI connector, data warehousing and MongoDB are converging. MongoDB does not aim to substitute Hadoop-based systems but rather to complement them and decrease the time to market for data-driven solutions.

You have been using MongoDB since 2009, way back when it was in its 1.x version. How has the database evolved over the years?

When I started using MongoDB, it was not much more than a JSON data store. It's amazing how far MongoDB has come in these 9 years in every aspect. Every piece of software has to evolve and adapt to an always changing environment. MongoDB started off as a JSON data store that was easy to set up and use while being blazingly fast, with some caveats. The turning point for MongoDB early in its evolution was introducing sharding. Challenging as it may be to choose the right shard key, being able to horizontally scale using commodity hardware is the feature that has been appreciated most by developers and architects throughout all these years. The introduction of the aggregation framework was another turning point, since it allowed developers to build data pipelines using MongoDB data, reducing time to market. Geospatial features were there from an early point in time, and one of MongoDB's earliest and most visible customers, Foursquare, was an avid user of them. Overall, MongoDB has matured with time and is now a robust database for a wide set of use cases. Document validations, fine-grained locking, a mature ecosystem of tools around it and a vibrant community mean that, no matter the language, state of development, startup or corporate environment, MongoDB can be evaluated as the database choice.
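To make the aggregation framework Alex describes a little more concrete, here is a minimal sketch using PyMongo. The connection string, the "shop.orders" collection, and the field names are hypothetical placeholders, not examples from the interview; the point is only that filtering, grouping, and sorting happen inside the database rather than in the application server.

```python
# Minimal PyMongo sketch of an aggregation pipeline (hypothetical data).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Filter, group, and sort inside MongoDB instead of pulling raw documents
# into the application and transforming them there.
pipeline = [
    {"$match": {"status": "shipped"}},              # keep shipped orders only
    {"$group": {"_id": "$customer_id",              # one bucket per customer
                "total_spent": {"$sum": "$amount"},
                "order_count": {"$sum": 1}}},
    {"$sort": {"total_spent": -1}},                 # biggest spenders first
    {"$limit": 10},
]

for doc in orders.aggregate(pipeline):
    print(doc["_id"], doc["total_spent"], doc["order_count"])
```

The same pipeline stages run unchanged in the mongo shell, which is part of why developers coming from the JavaScript side found the model so approachable.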
There have of course been features and directions that didn't end up as well as we had originally hoped. A striking example is the MongoDB MapReduce framework, which never lived up to the expectations of developers using MapReduce via Hadoop and has gradually been superseded by the more advanced and more developer-friendly aggregation framework.

What do you think are the most striking features of MongoDB? How does it help you in your day to day activities as a Senior Software Engineer?

In my day to day development tasks I almost always use the aggregation framework. It helps me quickly prototype a pipeline that can transform my data into a format I can then take to the data scientists, deriving useful insights in a fraction of the time needed by traditional tools. Day to day, or sprint to sprint, what you want from any technology is to be reliable, not get in your way, and help you achieve the business goals. With MongoDB we can easily store data in JSON format, process it, analyse it and pass it on to different frontend or backend systems without much hassle.

What are the different challenges that MongoDB developers and architects usually face while working with MongoDB? How does your book 'Mastering MongoDB 3.x' help in this regard?

The major challenge developers and architects face when choosing to work with MongoDB is database design. Irrespective of whether we come from an RDBMS or a NoSQL background, designing the database such that it can solve our current and future problems is a difficult task. Having been there and struggled with it in the past, I have put emphasis on how someone coming from a relational background can model different relationships in MongoDB. I have also included easy-to-follow checklists around different aspects of MongoDB. Backup and security are another challenge that users often face. Backups are often ignored until it's too late. In my book I identify all available options and the tradeoffs they come with, including cloud-based options. Security, on the other hand, is becoming an ever increasing concern for computing systems, with data leaks and security breaches happening more often. I have put an emphasis on security both in the relevant chapters and across most chapters by highlighting common security pitfalls and promoting secure practices wherever possible.

MongoDB has commanded a significant market share in the NoSQL databases domain for quite some time now, highlighting its usefulness and viability in the community. That said, what are the 3 areas where MongoDB can get better, in order to stay ahead of its competition?

MongoDB has conquered the NoSQL space in terms of popularity. The real question is how, or whether, NoSQL can increase its share of the overall database market. The most important area of improvement is interoperability. What developers get with popular RDBMSs is not only the database engine itself, but also easy ways to integrate it with different systems, from programming frameworks to Big Data and analytics systems. MongoDB could invest more heavily in building these libraries that can make a developer's life easier. Real-time analytics is another area with huge potential in the near future. With IoT rapidly increasing the data volume, data analysts need to be able to quickly derive insights from data, and MongoDB can introduce features to address this. Finally, MongoDB could improve by becoming more tunable in terms of the performance/consistency tradeoff.
It's probably a bit too much to ask a NoSQL database to support transactions, as this is not what it was designed for from the very beginning, but it would greatly increase the breadth of use cases if we could sparingly link up different documents and treat them as one, even with severe performance degradation.

Artificial Intelligence and Machine Learning are finding useful applications in every possible domain today. Although it's a database, do you foresee MongoDB going the Oracle way and incorporating features to make it AI-compatible?

Throughout the past few years, algorithms, processing power and the sheer amount of data that we have available have brought a renewed trust in AI. It is true that we use ML algorithms in almost every problem domain, which is why every vendor is trying to make the developer's life easier by making their products more AI-friendly. It's only natural for MongoDB to do the same. I believe that not only MongoDB but every database vendor will have to gradually focus more on how to serve AI effectively, and this will become a key part of their strategy going ahead.

Please tell us something more about your book 'Mastering MongoDB 3.x'. What are the 3 key takeaways for the readers? Are there any prerequisites to get the most out of the book?

First of all, I would like to say that as a "Mastering" level book we assume that readers have some basic understanding of both MongoDB and programming in general. That being said, I encourage readers to start reading the book and try to pick up the missing parts along the way. It's better to challenge yourself than the other way around. As for the most important takeaways, in no specific order of importance:

• Know your problem. It's important to understand and analyse as much as possible the problem that you are trying to solve. This will dictate everything, from data structures and indexing to database design decisions and technology choices. On the other hand, if the problem is not well defined, this may be the chance for MongoDB to shine as a database choice, as we can store data with minimal hassle.
• Be ready to scale ahead of time. Whether that is replication or sharding, make sure that you have investigated and identified the correct design and implementation steps so that you can scale when needed. Trying to add an extra shard when load has already peaked in the existing shards is neither fun nor easy to do.
• Use aggregation. Being able to transform data in MongoDB before extracting it for processing in an external system is a really important feature and should be used whenever possible, instead of querying large datasets and transforming the data in our application server.

Finally, what advice would you give to beginners who would like to be an expert in using MongoDB? What would the learning path to mastering MongoDB look like? What are the key things to focus on in order to master data analytics using MongoDB?

To become an expert in MongoDB, one should start by understanding its history and roots. They should understand and master schema design and data modelling. After mastering data modelling, the next step would be to master querying - both CRUD and more advanced concepts. Understanding the aggregation framework and how or when to index would be the next step. With this foundation, one can then move on to cross-cutting concerns like monitoring, backup and security, understanding the different storage engines that MongoDB supports and how to use MongoDB with Big Data.
All this knowledge should then provide a strong foundation to move on to the scaling aspects like replication and sharding, with the goal of building fault-tolerant, highly available systems. Mastering MongoDB 3.x explains these topics in this order, with the intention of taking you from beginner to expert in a structured, easy-to-follow way.
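As a rough illustration of the "scale ahead of time" advice above, the sketch below shows how sharding is typically enabled from Python against a mongos router. The database, collection, and shard key are hypothetical, and choosing that key well is exactly the hard part Alex points out; this is only one plausible setup, not a recipe from the book.

```python
# Hedged sketch: enabling sharding on a hypothetical "shop.orders" collection.
# Assumes a sharded cluster is already running and we connect via a mongos router.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-host:27017")

# A hashed index on the intended shard key spreads writes evenly across shards;
# it must exist before sharding a non-empty collection.
client["shop"]["orders"].create_index([("customer_id", "hashed")])

# Shard-management commands are issued against the admin database.
client.admin.command("enableSharding", "shop")
client.admin.command(
    "shardCollection", "shop.orders", key={"customer_id": "hashed"}
)
```

Doing this before load peaks is far cheaper than trying to re-shard a collection that is already struggling, which is the scenario the interview warns against.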


Is DevOps experiencing an identity crisis? [Interview]

Packt Editorial Staff
07 Jan 2020
7 min read
The definition of DevOps is a hotly disputed topic among amateur practitioners and experienced engineers alike. Ironically, DevOps was actually supposed to bring some order into the messy and chaotic environment of IT software development. In DevOps Paradox, DevOps expert Viktor Farcic talks to fellow industry figures who reveal their perspectives on the trend and what it means to them. In this article, we'll see what some prominent people in the DevOps community have to say about DevOps. The quotes in this article are taken directly from the book.

So, how are we supposed to incorporate DevOps into our organizations if we don't even know what it is? Let's hear Viktor's thoughts about what DevOps is, its trends, and where it is heading.

What is DevOps and why do we need it?

What is DevOps and why do we need it? What is the most important thing DevOps helps us achieve? What are the factors that drive the development of DevOps?

Viktor Farcic: Almost everyone gives a different answer to the question "What is DevOps?"; there is a huge discrepancy between the idea and the implementation. I believe that the main objective of DevOps is to enable self-sufficient product-oriented teams capable of having full control of their products. That is in stark contrast with the way many companies operate today. Normally, the lifecycle of an application is split between many teams. Business analysts define requirements, architects work on guidelines that must be followed and frameworks that must be used, developers write code, testers are in charge of validations, operators deploy new releases, and so on and so forth. The problem is that each of those groups belongs to a different department and has different and often opposing objectives. Instead of fostering collaboration towards a common goal, different teams (departments) are looking out for their own shortsighted interests. DevOps tries to remove organization based on the type of tasks performed and to unite all the expertise required for the whole lifecycle of an application into a single team reporting to a single person. It forces us to work together and it builds empathy.

Is DevOps a process? Or a set of technologies? What's your perspective on this area of debate?

Viktor: DevOps is neither of the two. Unlike some agile frameworks (e.g. Scrum), there is no prescribed process to follow. Similarly, there is no technology we can adopt that will convert us into "DevOps" teams. It is only an idea that developers, operators, and everyone else need to work together instead of being isolated in different silos. That does not mean that technology is not important; it is, but often for other than obvious reasons. Every new technology is created by a group of people who worked together to create it. As such, it always reflects the processes of those involved in creating it. Those processes, in turn, are a result of the culture of the people following them. In other words, every technology is a result of certain processes created as a result of the culture of the team that worked on it. So, even though we use an end result, it is a product of a process created in a specific culture. If we adopt a technology that does not match our own processes and culture, it will produce suboptimal results at best. So, we must either adopt technology that matches our processes and culture or use it to change them. One cannot work without the other. All in all, DevOps is an idea, not much more than that.
It's up to us to figure out which processes and technology will help us make it reality.

What does a DevOps engineer do? Is it even a real job role? What are the core roles of DevOps Engineers in terms of development and infrastructure?

Viktor: I don't think there is such a thing as a "DevOps Engineer." The term was invented by people who were not ready to apply the changes DevOps leads to. Most of the time, a "DevOps Engineer" is just a different name for someone working in shared services, operations, infrastructure, or whichever department was first to be renamed into DevOps.

Do you think DevOps is experiencing an identity crisis?

Viktor: DevOps was never defined as a process. Agile, for example, got quite a few implementations that tell people what to do. Among others, we got Scrum, which clearly defines what to do. We could even argue that Scrum, being a set of practices that must be followed, is against the spirit of Agile, but that's a conversation for some other time. What matters is that no one defined the process behind DevOps. There is no such thing as a set of steps that must be followed daily or weekly. It is just an idea that we should work together and not throw things over department walls. As such, the way to accomplish that is open to interpretation. So, DevOps never had a clear identity, and therefore it cannot have an identity crisis either. It's just an idea, and it's up to each one of us to try to figure out how to make it reality.

The biggest challenges in DevOps today

What are the biggest challenges in DevOps at the moment?

Viktor: Currently, DevOps is mostly misunderstood. More often than not, companies just rename a department. In some companies, shared services become DevOps teams; in others it is infrastructure, operations, or any other department. It's as if it was a race and the first department to change its name to "DevOps" was the winner. Logically, changing the name means nothing and does not result in any tangible improvement. The key challenges are related to people and culture. DevOps is not easy because it challenges the current organizational structure, it restructures power within an organization, and it questions the need for the existence of many departments. As such, middle management is often against it because it is perceived as a risk to their position. At the same time, people who have spent many years doing the same thing over and over again feel that their credibility is at risk if the structure that allowed them to climb the company ladder is removed.

Congratulations on the release of DevOps Paradox. Could you talk a little bit about the idea behind it and what you hope it achieves?

Viktor: I go to a lot of conferences and I realized that scheduled talks are not the main takeaway from them. True, I learned things by listening to them, but the primary reason I continue attending is the "corridor talks." Conferences are a great opportunity for me to find interesting people and have amazing discussions. Unlike scheduled talks, those conversations are not structured. I do not prepare a list of questions for the next person I'll meet in between talks or at a party. Instead, we just start talking about a random thing that happens to be interesting. I wanted to bring those types of conversations to people who cannot travel the world and be at a different conference in a different country every month. So, I did not have any real goals for this book, other than speaking with people about any topic, as long as it is related to DevOps.
Since DevOps can be anything related to software development, you could say that the scope of the book is as broad as it can be. My true goal was to enjoy having conversations with people. I did not prepare questions in advance. Instead, I just gathered people I would like to speak with if I met them at a conference and said, "Let's have a coffee and see what you've been up to since the last time we met." Some of those I interviewed are my friends, while others I met for the first time. Some work for huge enterprises, while others work in startups. Some have worked in the software industry for many years, while others are young up-and-coming experts. I wanted to make sure that the book gives as many different opinions as possible.

Find Viktor Farcic's DevOps Paradox on the Packt store. Read the first chapter for free on the Packt subscription platform.


Bringing AI to the B2B world: Catching up with Sidetrade CTO Mark Sheldon [Interview]

Packt Editorial Staff
24 Feb 2020
13 min read
Sidetrade is an organization on a mission to transform customer engagement in the world of B2B marketing with the help of artificial intelligence. With its own AI technology - Aimie - it is now in a strong position to carve out a niche for itself in a market that shows no signs of slowing down. What makes the company even more exciting for us at Packt is that they're just a stone's throw away from our offices in Birmingham. To get the lowdown on Sidetrade we spoke to CTO Mark Sheldon about the company's evolution and what the future might hold. Read the interview below...

Packt: Tell us a bit about your background and what you're up to today.

Mark Sheldon: I started my career as a developer and moved into the management of a large technical team at one of the 'big six' utility companies in the UK. Back in 2013, when the AI buzz was in its infancy, I co-founded a predictive analytics software company called BrightTarget. It was clear there was a better way for B2B organisations to gain more value from their data, and cloud computing and machine learning were clear market changers. In 2017 BrightTarget was acquired by Sidetrade, and at this point I became part of their technical leadership team, with the goal of making Sidetrade an AI-driven company. More recently I moved into the Group CTO position (as part of the global leadership team), responsible for more than 85 staff. Sidetrade has a total of 250 staff across six offices in Europe, with expansion planned in 2020.

The AI boom and its impact on the B2B landscape

Packt: Gartner predicts that this year 30% of B2B companies will use AI to augment at least one of their primary sales processes. What's your take on this?

Mark: Yes, only 30%, so this market is still just emerging. Although machine learning has been around for decades, there's still a lot of confusion around AI in many B2B organisations, mostly caused by all of the market and vendor hype. Very few have successfully deployed machine learning and are able to demonstrate value. However, for those that have, the potential for commercial gain from deploying AI is huge. The most common processes impacted in sales and marketing are those which involve interactions with customers or prospects at scale, where the decision making of a human can be augmented or improved, e.g. identifying customers most at risk of churn, customers with the best opportunity to sell more product, or prospects with the highest propensity to become a customer. AI really allows sales and marketing teams to optimize their time and marketing spend.

BrightTarget and Sidetrade

Packt: You co-founded BrightTarget, which was acquired by Sidetrade in 2016. Could you tell us a little bit about BrightTarget?

Mark: BrightTarget was founded in 2014, on the principle of helping B2B organisations deploy AI without the need for expensive and hard to find data scientists. We invested significantly in automating the process of data loading, processing (feature generation), model building and monitoring. We achieved strong traction with some large enterprise accounts and were recognized by Forrester as a "Strong Performer".

Packt: How did the acquisition come about?

Mark: At this time [when BrightTarget was founded], Olivier Novasque (the Sidetrade CEO and founder) had a clear vision to transform Sidetrade into an AI-driven business. So the acquisition of BrightTarget in November 2016 was a natural fit with the ambitions of Sidetrade and their goals.
This has proven to be a great move, with the launch of Aimie (our AI engine) contributing significantly to the revenue growth that followed the acquisition.

Aimie: Sidetrade's AI technology

Packt: Tell us a bit more about Aimie. How does it work? What's the thinking behind it?

Mark: Aimie is Sidetrade's proprietary AI technology that helps our customers augment their daily experience within our products. For example, Aimie helps every cash collector make the optimum collection decisions, even if they joined the company only two weeks ago! This AI technology is at the heart of our SaaS platforms: Augmented Revenue (helping B2B organisations to manage their revenue, including managing revenue at risk and finding opportunities to grow revenue from existing customers) and Augmented Cash (again, helping B2B organisations improve working capital through better cash collection). We also have an unrivalled data lake built up over 20 years. We now have 230 million B2B payment experiences, totalling sales of over 700 billion euros [£621 bn], which we train our AI on and which enriches our clients' own data. More good quality data for AI to train on means better predictions and outcomes.

For example (as reported in Fortune and Forbes), one of our enterprise clients is Manpower, one of the biggest recruitment firms in the world. With an annual income of €4 bn per year, Manpower France collects 1.3 million receivables from 80,000 companies. To handle this volume, and increasingly complex payment procedures, Manpower's finance department started using Sidetrade technology in 2013, and introduced Aimie in 2018-19. Manpower started Aimie off with two customer portfolios for a period of two months. Aimie analyzed what had worked before for Manpower, directly executed automatic follow-up actions, and established which past-dues to target first. She considered available resources (staff hours, workloads) in order to take optimal actions. Encouraged by the results, Manpower ramped up their use of Aimie. Within four months, Aimie was managing nearly 60% of their single-site customers, which represents over 5,000 accounts and nearly 10,000 follow-up actions per month. Manpower has over 700 payer centers to manage, making it impossible for a manager to call all the debtors in their portfolio. Aimie helped them decide which customers to contact first. After nine months of testing, the results were clear: with support from Aimie, the effectiveness of recovery actions grew 12%. That's a good improvement in cash collection, which boosts working capital, vital for business.

Sidetrade's data science team

Packt: Sidetrade has a data science team - what is it and how does it function? How does your team of data engineers and data scientists work in tandem with the product teams to create AI-powered B2B solutions for customers? Do they also work on customized solutions?

Mark: Dr Clement Chastagnol (PhD in AI and robotics) leads our data science team. We currently have a team focused more on research topics, who really push the boundaries on some of the latest aspects of AI. However, the majority of our data scientists work directly within product-led squads (with a mixture of different data, application and ops engineers). The reason for this is to ensure we deliver actionable AI/ML into our products on a regular basis, and that we stay customer (and therefore product) focused.
As a SaaS company with thousands of customers and users, almost all of the work we do goes towards improving our overall products and adding features that benefit the majority. This also applies to data science, although we have a very advanced ML platform which allows us to automatically build and manage thousands of ML models, which are often client-specific.

In terms of research, each year we work with the French Government's business ombudsman to research and produce an index report of all B2B business payment disputes, including figures by industry and length of delay. This involves our data scientists analysing over 9,000 French customer companies, representing 91% of large corporations and organizations with 250 to 5,000 employees. The data analyzed covers over 2.8 million invoices totalling €12bn.

Also as part of our research work, we have received funding from the French Government, EU Commission agencies, the French national agency for research, and the DataAI Institute on the following projects:

• Eurofirmo, which is an index of all 26 million businesses in the EU and Britain, including headcount and revenue, which has never been done before.
• Re-search Alps, a collaboration with academics from four universities that aims to track all research-active research institutes across seven European countries. It records their research projects, funding, publications, patents, and other academic output.
• Dirty Data, under which two research axes are funded. One revolves around dirty data integration, funded by the ANR (Agence Nationale de la Recherche). The other strives to develop new techniques to analyze incomplete data and is funded by the DataAI institute. As part of this project, we've worked with Gaël Varoquaux (ML researcher and creator of scikit-learn), which has been great.

Alongside all these projects, the team has recently worked with Facebook Research on the topic of data drift, as well as publishing research in journal papers, attending and presenting at academic conferences, supporting PhD students, and running hackathons and guest lectures at universities.

Developing new talent in the AI space

Packt: Sidetrade recently launched The Code Academy - what is it? How can developers take part in this initiative and how will it benefit them? What are other key initiatives by Sidetrade?

Mark: The Code Academy is designed to develop the next generation of AI talent and is important for Sidetrade to maintain its position as a leading AI-powered customer platform. The Code Academy, which was piloted in 2018, is part of Sidetrade's commitment to providing engineering skills and jobs for young people in the Midlands, to keep the UK at the forefront of the AI industry. It's a new, rapid approach to training and job creation. We welcome trainees with computing and non-computing backgrounds who can demonstrate ability and a passion for technology. It's rapid, as we design and deliver the academy in-house over four weeks, with a lot of support from senior developers in the team. We train for job roles, rather than just imparting coding. And it's offered at no cost to the trainee, so money isn't a barrier. At the end of the four weeks the trainees are given a challenge, and asked to present their work to an audience of senior staff.
Academy modules include:
• Becoming familiar with Git
• Setting up VSCode for .NET and web development
• Loading a relational data set through pgAdmin (CSV)
• Learning how to write T-SQL to analyse and find trends within a data set
• Learning the concept of, and developing, a basic RESTful service
• An introduction to Angular (using http://angular.io/start)
• Learning to connect all layers of the stack
• Using Kanban (Trello) to manage projects
• How to define an MVP

In 2018 we trained 10 people and offered software and data engineer roles to three. In 2019, we revamped the academy, making it much more practical, and selected 12 trainees from 50 applicants. The quality of the talent was so good we offered five trainees data and software engineer roles with our professional services and R&D teams.

Expanding the team

Packt: You're about to move into a new, much bigger office in central Birmingham. What are your challenges in terms of expanding the team? Can you elaborate on the challenges the team faces in terms of working with AIOps?

Mark: That's right, we're going to open a new Tech Hub that will house a much bigger team of data and software engineers working across the full stack. We'll also run our 2020 Code Academy from the hub. A special launch event, Together for Tech, will make the opening official on 27th February 2020, with VIP guests and tech and business stakeholders. We'll also be announcing a major investment in R&D and job creation. There is huge potential for Birmingham to become a tech powerhouse within Europe.

The challenge for me is hiring enough senior-level tech professionals. These people are needed to lead teams, develop staff, and keep pushing the boundary of what we can do. There's an overall challenge to hire enough qualified professionals for the tech sector, and it is more acute at the senior levels. I think there's a temptation for experienced tech types to head to London or even America, so it's a challenge for the region to retain great talent.

The team has spent a lot of time on 'AIOps', which has emerged in the past three years. So the other challenge is how we actually productionise all of the models and data engineering that our team and platform produce. How do we deploy and run them in production? How do we monitor them? This is all about managing machine learning models at scale. For me, the sheer volume the team has to deal with is the biggest thing. We train thousands of different predictive models for different clients, and they are changing all of the time in terms of the data they are trained on. So actually, building workflows and processes to help monitor those in production at that kind of scale, without having to scale the team in proportion, is probably the biggest challenge.

The future of AI and automation for B2B marketing and sales

Packt: AI is a really broad term. It often gets used interchangeably with machine learning and deep learning. Do you think this confusion is risky or dangerous? Do you think people should simply stop talking about AI in favour of machine learning and deep learning?

Mark: I think we've reached a point where AI has become a buzzword and a catch-all phrase, and I think we can and should start being more sophisticated about what we mean; it's a work of education. From a vendor and business point of view, AI is no longer a differentiator, as everyone is talking about it, which makes it harder to stand out.
But as decision makers become more educated on the topic, it's clear which vendors have the expertise and depth of data to deliver true AI-powered solutions.

Packt: What do you expect to come next in the B2B sales and marketing space? And how is automation of this space likely to impact other industries?

Mark: My prediction for the next big thing in the AI space would be a major breakthrough in quantum computing by either Google or one of the startups specializing in the field. In the B2B sales and marketing space, I think the next step is simply wider adoption and trust that AI can augment or even outperform humans. Most businesses will need to go through the cultural, and often organisational, shift required to get the full commercial benefits out of AI.

Thanks for taking the time to speak to us, Mark! We'll be watching Sidetrade closely over the months and years to come. It's also great to see such an exciting and innovative company growing in Birmingham, right near the Packt office. Learn more about Sidetrade: www.sidetrade.com


Why is Python so good for AI and Machine Learning? 5 Python Experts Explain

Richard Gall
13 Mar 2018
7 min read
Python is one of the best programming languages for machine learning, quickly coming to rival R's dominance in academia and research. But why is Python so popular in the machine learning world? Why is Python good for AI? Mike Driscoll spoke to five Python experts and machine learning community figures about why the language is so popular, as part of the book Python Interviews.

Programming is a social activity - Python's community has acknowledged this best

Glyph Lefkowitz (@glyph), founder of Twisted, a Python network programming framework, awarded the PSF's Community Service Award in 2017:

AI is a bit of a catch-all term that tends to mean whatever the most advanced areas in current computer science research are. There was a time when the basic graph-traversal stuff that we take for granted was considered AI. At that time, Lisp was the big AI language, just because it was higher-level than average and easier for researchers to do quick prototypes with. I think Python has largely replaced it in the general sense because, in addition to being similarly high-level, it has an excellent third-party library ecosystem and a great integration story for operating system facilities. Lispers will object, so I should make it clear that I'm not making a precise statement about Python's position in a hierarchy of expressiveness, just saying that both Python and Lisp are in the same class of language, with things like garbage collection, memory safety, modules, namespaces and high-level data structures.

In the more specific sense of machine learning, which is what more people mean when they say AI these days, I think there are more specific answers. The existence of NumPy and its accompanying ecosystem allows for a very research-friendly mix of high-level work with very high-performance number-crunching. Machine learning is nothing if not very intense number-crunching.

"...Statisticians, astronomers, biologists, and business analysts have become Python programmers and have improved the tooling."

The Python community's focus on providing friendly introductions and ecosystem support to non-programmers has really increased its adoption in the sister disciplines of data science and scientific computing. Countless working statisticians, astronomers, biologists, and business analysts have become Python programmers and have improved the tooling. Programming is fundamentally a social activity, and Python's community has acknowledged this more than any other language except JavaScript. Machine learning is a particularly integration-heavy discipline, in the sense that any AI/machine learning system is going to need to ingest large amounts of data from real-world sources as training data or system input, so Python's broad library ecosystem means that it is often well positioned to access and transform that data.

Python allows users to focus on real problems

Marc-Andre Lemburg (@malemburg), co-founder of the PSF and CEO of eGenix:

Python is very easy to understand for scientists who are often not trained in computer science. It removes many of the complexities that you have to deal with when trying to drive the external libraries that you need to perform research. After Numeric (now NumPy) started the development, the addition of IPython Notebooks (now Jupyter Notebooks), matplotlib, and many other tools made things even more intuitive. Python has allowed scientists to mainly think about solutions to problems and not so much about the technology needed to drive these solutions.
"Python is an ideal integration language which binds technologies together with ease." As in other areas, Python is an ideal integration language, which binds technologies together with ease. Python allows users to focus on the real problems, rather than spending time on implementation details. Apart from making things easier for the user, Python also shines as an ideal glue platform for the people who develop the low-level integrations with external libraries. This is mainly due to Python being very accessible via a nice and very complete C API. Python is really easy to use for math and stats-oriented people Sebastian Raschka (@rasbt), researcher and author of Python Machine Learning I think there are two main reasons, which are very related. The first reason is that Python is super easy to read and learn. I would argue that most people working in machine learning and AI want to focus on trying out their ideas in the most convenient way possible. The focus is on research and applications, and programming is just a tool to get you there. The more comfortable a programming language is to learn, the lower the entry barrier is for more math and stats-oriented people. Python is also super readable, which helps with keeping up-to-date with the status quo in machine learning and AI, for example, when reading through code implementations of algorithms and ideas. Trying new ideas in AI and machine learning often requires implementing relatively sophisticated algorithms and the more transparent the language, the easier it is to debug. The second main reason is that while Python is a very accessible language itself, we have a lot of great libraries on top of it that make our work easier. Nobody would like to spend their time on reimplementing basic algorithms from scratch (except in the context of studying machine learning and AI). The large number of Python libraries which exist, help us to focus on more exciting things than reinventing the wheel. Python is also an excellent wrapper language for working with more efficient C/C++ implementations of algorithms and CUDA/cuDNN, which is why existing machine learning and deep learning libraries run efficiently in Python. This is also super important for working in the fields of machine learning and AI. To summarize, I would say that Python is a great language that lets researchers and practitioners focus on machine learning and AI and provides less of a distraction than other languages. Python has so many features that are attractive for scientific computing Luciano Ramalho (@ramalhoorg) technical principal at ThoughtWorks and fellow of The PSF The most important and immediate reason is that the NumPy and SciPy libraries enable projects such as scikit-learn, which is currently almost a de facto standard tool for machine learning. The reason why NumPy, SciPy, scikit-learn, and so many other libraries were created in the first place is because Python has some features that make it very attractive for scientific computing. Python has a simple and consistent syntax which makes programming more accessible to people who are not software engineers. "Python benefits from a rich ecosystem of libraries for scientific computing." Another reason is operator overloading, which enables code that is readable and concise. Then there's Python's buffer protocol (PEP 3118), which is a standard for external libraries to interoperate efficiently with Python when processing array-like data structures. 
Finally, Python benefits from a rich ecosystem of libraries for scientific computing, which attracts more scientists and creates a virtuous cycle.

Python is good for AI because it is strict and consistent

Mike Bayer (@zzzeek), Senior Software Engineer at Red Hat and creator of SQLAlchemy:

What we're doing in that field is developing our math and algorithms. We're putting the algorithms that we definitely want to keep and optimize into libraries such as scikit-learn. Then we're continuing to iterate and share notes on how we organize and think about the data. A high-level scripting language is ideal for AI and machine learning, because we can quickly move things around and try again. The code that we create spends most of its lines on representing the actual math and data structures, not on boilerplate. A scripting language like Python is even better, because it is strict and consistent. Everyone can understand each other's Python code much better than they could in some other language that has confusing and inconsistent programming paradigms. The availability of tools like the IPython notebook has made it possible to iterate and share our math and algorithms on a whole new level. Python emphasizes the core of the work that we're trying to do and completely minimizes everything else about how we give the computer instructions, which is how it should be. Automate whatever you don't need to be thinking about.

Getting Started with Python and Machine Learning
4 ways to implement feature selection in Python for machine learning
Is Python edging R out in the data science wars?
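To illustrate the low-boilerplate NumPy/scikit-learn workflow the interviewees keep returning to, here is a minimal, hedged sketch. The dataset and model choice are arbitrary and only meant to show how few lines sit between an idea and a fitted model; it is not code from the book.

```python
# Minimal scikit-learn sketch: the kind of short, readable workflow the
# interviewees credit for Python's popularity in machine learning.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)            # NumPy arrays straight from the library
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)    # consistent estimator API: fit/predict
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Almost every line here expresses the experiment itself rather than plumbing, which is the point Lefkowitz, Raschka and Bayer each make in their own way.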

Is Apache Spark today's Hadoop?

Amey Varangaonkar
02 Oct 2017
7 min read
With businesses generating data at an enormous rate today, many Big Data processing alternatives such as Apache Hadoop, Spark, Flink, and more have emerged in the last few years. Apache Spark among them has gained a lot of popularity of late, as it offers ease of use and sophisticated analytics, and helps you process data with speed and efficiency.

Romeo Kienzler, Chief Data Scientist in the IBM Watson IoT worldwide team, has been helping clients all over the world find insights from their IoT data using Apache Spark. An Associate Professor for Artificial Intelligence at the Swiss University of Applied Sciences, Berne, he is also a member of the IBM Technical Expert Council and the IBM Academy of Technology, IBM's leading brains trust.

In this interview, Romeo talks about his new book on Apache Spark and Spark's evolution from just a data processing framework to becoming a solid, all-encompassing platform for real-time processing, streaming analytics and distributed Machine Learning.

Key Takeaways

Apache Spark has evolved to become a full-fledged platform for real-time batch processing and stream processing. Its in-memory computing capabilities allow for efficient streaming analytics, graph processing, and machine learning. It gives you the ability to work with your data at scale, without worrying whether it is structured or unstructured.
Popular frameworks like H2O and DeepLearning4J are using Apache Spark as their preferred platform for distributed AI, Machine Learning, and Deep Learning.

Full-length Interview

As a data scientist and an assistant professor, you must have used many tools both for your work and for research. What are some key criteria one must evaluate while choosing a big data analytics solution? What are your go-to tools, and where does Spark rank among them?

Scalability: make sure you can use a cluster to accelerate execution of your processes. TCO: how much do I have to pay for licensing and deployment? Consider the usage of open source (but keep maintenance in mind). Also, consider the cloud.

I've shifted completely away from non-scalable environments like R and Python pandas. I've also shifted away from Scala for prototyping. I'm using Scala only for mission-critical applications which have to be maintained for the long term. Otherwise, I'm using Python. I'm trying to stay completely on Apache Spark for everything I'm doing, which is feasible since Spark supports SQL, Machine Learning, and Deep Learning. The advantage is that everything I'm doing is scalable by definition, and once I need it, I can scale without changing code.

What does the road to mastering Apache Spark look like? What are some things that users may not have known about Apache Spark? Can readers look forward to learning about some of them in your new book, Mastering Apache Spark, Second Edition?

Scaling on very large clusters is still tricky with Apache Spark because at a certain point scale-out is not linear anymore, so a lot of tweaking of the various knobs is necessary. Also, the Spark API is somewhat more tedious than that of R or Python pandas, so it takes some energy to really stick with it and not go back to "the good old RStudio". Next, I think the strategic shift from RDDs to DataFrames and Datasets was a disruptive but necessary step. In the book, I try to justify this step and first explain how the new API and the two related projects, Tungsten and Catalyst, work.
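As an editorial illustration of that shift (not an excerpt from the book), here is a minimal PySpark sketch of the same aggregation written first against the older RDD API and then against the DataFrame API; the file name and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("rdd-vs-dataframe").getOrCreate()

# Older RDD style: functional transformations over raw Python objects.
lines = spark.sparkContext.textFile("events.csv")          # hypothetical input file
counts_rdd = (lines.map(lambda line: line.split(",")[0])    # first column = device id
                   .map(lambda device: (device, 1))
                   .reduceByKey(lambda a, b: a + b))

# DataFrame style: a declarative query that Catalyst can optimise and
# Tungsten can execute with efficient memory management and code generation.
events = spark.read.option("header", True).csv("events.csv")
counts_df = events.groupBy("device").agg(F.count("*").alias("events"))

counts_df.show()
```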
Then I show how things like machine learning, streaming, and graph processing are done in the traditional, RDD-based way as well as in the new DataFrames- and Datasets-based way.

What are the top 3 data analysis challenges that never seem to go away, even as time and technology keep changing? How does Spark help alleviate them?

Data quality. Data is often noisy and in bad formats. The majority of my time is spent improving it through various methodologies. Apache Spark helps me to scale, and SparkSQL and SparkML pipelines introduce a standardized framework for doing so.
Unstructured data preparation. A lot of data is unstructured, in the form of text. Apache Spark allows me to pre-process vast amounts of text and create tiny mathematical representations out of it for downstream analysis.
Instability of technology. Every six months there is a new hype which seems to make everything you've learned redundant. For example, there exist various scripting languages for big data. SparkSQL ensures that I can use my already acquired SQL skills now and in the future.

How is the latest Apache Spark 2.2.0 a significant improvement over the previous version?

The most significant change, in my opinion, was labeling Structured Streaming as GA and no longer experimental. Otherwise, there have been "only" minor improvements, mainly on performance (72 to be precise, all documented in JIRA, since it is an Apache project). The most significant improvement between versions 1.6 and 2.0 was whole-stage code generation in Tungsten, which is also covered in this book.

Streaming analytics has become mainstream. What role did Apache Spark play in leading this trend?

Apache Spark actually takes it to the next level by introducing the concept of continuous applications. With Apache Spark, the streaming and batch APIs have been unified, so you no longer have to care what type of data you are running your queries on. You can even mix and match, for example joining a structured stream, a relational database, a NoSQL database, and a file in HDFS within a single SQL statement. Everything is possible.

Mastering Apache Spark was first published back in 2015. Big data has greatly evolved since then. What does the second edition of Mastering Apache Spark offer readers today in this context?

Back in 2015, Apache Spark was just another framework within the Hadoop ecosystem. Now, Apache Spark has grown to be one of the largest open source projects on this planet! Apache Spark is the new big data operating system, like Hadoop was back in 2015. AI and Deep Learning are the most important trends and, as explained in this book, frameworks like H2O, DeepLearning4J and Apache SystemML are using Apache Spark as their big data operating system to scale. I think I've done a very good job in taking real-life examples from my work and finding a good open data source, or writing a good simulator, to give hands-on experience in solving real-world problems. So in the book, you should find a recipe for all the current data science problems you find in the industry.

2015 was also the year when Apache Spark and IBM Watson chose to join hands. As the Chief Data Scientist for IBM Watson IoT, give us a glimpse of what this partnership is set to achieve.

This partnership underpins IBM's strong commitment to open source. Not only is IBM contributing to Apache Spark, IBM also creates new open source projects on top of it. The most prominent example is Apache SystemML, which is also covered in this book.
The next three years are dedicated to Deep Learning and AI, and IBM's open source contributions will help the Apache Spark community to succeed. The most prominent example is PowerAI, where IBM outperformed all state-of-the-art deep learning technologies for image recognition.

For someone just starting out in the field of big data and analytics, what would your advice be?

I suggest taking a Machine Learning course from one of the leading online training vendors. Then take a Spark course (or read my book). Finally, try to do everything yourself: participate in Kaggle competitions and try to replicate papers.
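To ground Romeo's earlier point about the unified batch and streaming API, here is a small editorial sketch in PySpark (not taken from the book) that joins a structured stream with a static table in a single query; the paths, schema, and column names are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("unified-api-demo").getOrCreate()

# Static (batch) reference data, e.g. exported from a relational database.
customers = spark.read.parquet("hdfs:///data/customers.parquet")   # hypothetical path

# A structured stream of purchase events landing as JSON files.
event_schema = StructType([
    StructField("customer_id", StringType()),
    StructField("amount", DoubleType()),
])
events = (spark.readStream
               .schema(event_schema)
               .json("hdfs:///landing/purchases/"))                # hypothetical path

# The same DataFrame operations work on streams and on batch tables,
# so enriching the stream is just an ordinary join.
enriched = events.join(customers, on="customer_id", how="left")

query = (enriched.writeStream
                 .format("console")
                 .outputMode("append")
                 .start())
query.awaitTermination()
```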

On Adobe InDesign 2020, graphic designing industry direction and more: Iman Ahmed, an Adobe Certified Partner and Instructor [Interview]

Savia Lobo
24 Jan 2020
12 min read
Gone are the days when graphic design was solely focused on the obvious graphic elements of a product like its packaging and marketing materials. Today the impact of technology and the digital revolution is huge on how we communicate, the way we work, and even the way we socialize. Graphic design is no exception to this change. Technology plays a major role in the creation of digital work available in many fields. For example, portfolio design, presentations, signage, logos, websites, animations, and even architectural production have all traveled far since the dawn of the digital revolution. This graphic designing evolution has enabled brands with greater exposure online and enabled users with engaging and interesting graphics. Recently, we had a conversation with Iman Ahmed, an Adobe Certified Partner and Instructor and CompTIA Certified Technical Trainer on the current graphic design industry, various design tools, and how is the future. Iman has 19+ years of solid international experience in delivering the skills of various applications, from Architecture, Graphic design, Infographics, Motion Graphics, Photo Editing, Magazine and book design, video production, 3D Modelling. We also discussed her recently published book, Mastering Adobe InDesign 2020, a step-by-step guide to learn InDesign Framework, Workspace, Project setup, Master pages, Pages, Text, among other core features. It also explores new features in InDesign 2020 release and how are they useful to graphic designing professionals. Adobe InDesign, a preferred choice for designing text-heavy documents Adobe’s Creative Suite of tools offers graphic designers all kinds of solutions needed to create professional and engaging graphics. From photo editing to typography tools to sound design Adobe literally has everything covered for any type of design project. So how can one compare Adobe InDesign with other tools, specifically with Illustrator and Photoshop? Iman clarifies, “Adobe applications has no relation to the user's level of experience, it is all about the purpose. Adobe Photoshop is the best application for photo editing, while Adobe Illustrator is the best used for vector design, and Adobe InDesign is created for layout design, as it has tools and options that facilitate the layout design process.” Graphic designers use InDesign when they need to layout a multi-page, text-heavy piece. For example in print or digital, InDesign is used to layout text. It is a one-stop solution for designing a magazine, brochure or a booklet. Out of the three applications, InDesign has the most robust typesetting features available. It also integrates with Adobe Digital Publishing Solution, allowing designers to create fully interactive e-books, magazines, and other digital publications. Key features in Adobe InDesign 2020 version Adobe released InDesign 2020 version in November 4th this year. This release brings significant upgrades and changes as per Adobe InDesign user requests. With this release, InDesign tool now supports SVG file formats. Graphic designers will be able to use infinitely customizable fonts or variable fonts within InDesign. There is a more efficient way to place lines, or “rules,”  between columns of text. Additionally, it includes improvements to InDesign’s core performance and launch times for up to 25% faster. Iman says about the new release, “Adobe InDesign fixed a lot of bugs in this release, and offered an improvement to resolve document corruption. 
Opening a particular file, saving and closing it has become faster in this release. And text editing is also faster than earlier releases. There are a bunch of Text features which have been added to Adobe InDesign 2020 release, such as variable fonts and column rules, a new feature in spell check, plus five new languages - Thai, Burmese, Lao, Khmer, and Sinhalaare are supported in this release.” She further discussed why InDesign is considered to be a tool of choice for Multi-page projects, and how the Master page feature is one of the key features of InDesign. She says, “Master page is one of the most powerful features of Adobe InDesign, it acts as a template, as master page will host all common features that are needed to repeat in document pages, and any changes made in the master page, will reflect into all document pages that follow this particular master page. Hence, a professional designer has to be smart enough in designing the document, to decide how many master pages to use in one document.” You can read Chapter 6 of this book, Mastering Adobe InDesign 2020, to know more about Master page feature and how to create different Master pages for your document. Iman’s education in architecture designing is foundational to her graphics designing career Iman has studied architecture design, and she mentions that her undergraduate and postgraduate education in architecture has contributed to her success as an Engineering Applications’ Instructor and as a Graphic Design Instructor and designer. Iman talks about her graphic designing journey and shares this quote from Frank Lloyd Wright, “The mother of art is architecture. Without an architecture of our own we have no soul of our own civilization.” Iman takes pride and feels lucky to study the mother of all arts, which guided her strongly in her graphic designer career. She mentions that the steps she took can be common with everyone who feels passionate about learning graphic design. Undoubtedly architecture was the cornerstone in her pathway, but gaining knowledge of tools is important too to accomplish for a good designer. She says, “My graphic design path literally started when I used Adobe Photoshop for the first time in an architecture presentation. Then I felt more interested to learn more about Adobe Photoshop and photo editing. One year later, I started my career as an Adobe Photoshop instructor, and a graphic designer for the same training center, beside my work as an architect.” “I taught myself more about Photoshop, plus a bunch of applications using offline help, as YouTube was not available. For years, I kept searching for design ideas, learn new applications, teach what I learnt and practice a lot. Honestly, this is not enough, to be a professional graphic designer, I still need more.” On her inspiration to write Mastering Adobe InDesign CC 2020 Iman says, “As a trainer, it is the most joyful moment when you help others to learn and improve. For more than 19 years, I am teaching people from the Middle East, Europe, USA, Canada, Australia, Asia and Africa. Their nationalities are different, but their enthusiasm looks the same. My dream is to spread knowledge and help more and more people to learn. So for me writing a book about Adobe InDesign was a great chance to share my knowledge and experience to a broader audience. In my book I have shared 19 years of experience, it is not only about InDesign, but also about the design process. 
It helps both designers and non-designers to work more efficient with Adobe InDesign tool and makes them aware of the steps before implementing the design. It covers various exercises  and examples to enhance reader's skills, and sharpens the skills of intermediate and expert users.” The growth story of the graphic design industry is far from over According to the U.S. Bureau of Labor Statistics, the graphic design field is expected to grow by 3% from 2018 to 2028, which is slower than average. More graphic designers are jumping to be freelancers as compensation is a major issue in a full time job role. Companies pay lesser salaries at a junior and intermediate level roles in the graphic design department. Hence, designers prefer freelancing as they can sell their creatives and design templates to multiple clients with less effort. Additionally, there are perks to being your own boss. For instance, you get to set your own working hours and choose your own jobs. As freelancers are in millions all over the world, it gets difficult to track their number in the statistics while calculating the industry growth rate. And it becomes an outlier in showing the actual industry growth results thus it shows slower than the average. Iman shared her thoughts which are on similar lines, she says, “Competition is aggressive everywhere, the first reason for this competition is the unemployment phenomenon that we are facing in this decade. Globalization and platforms that offer freelancing designers to hire from everywhere negatively affects the field and market. As some designers are cheap, as per their economy and currency standards.” But on the other hand, working at a firm has its own perks. The company will be responsible for maintaining your work environment, purchasing equipment and software, and building a client base. And graphic designers will be more likely to work regular hours for a predictable paycheck. On this Iman says, “If you work hard on your portfolio, and know how to network well with others, definitely you will be hired by reputed organizations.” If you're not sure of what to start with, it's always a good idea to intern at a small or medium firm and gain experience in the industry. Then get to know your work style, and choose what fits best. Graphic designers ditching corporate culture for freelancing Out of more than 250,000 graphic designers in the U.S., almost 25% are self-employed. This number is expected to rise in the coming years due to millennials ditching the corporate culture for a freelance lifestyle. On this we asked Iman about a typical graphic designer career graph and what some pathways available are. We also asked if professionals require a degree to enter this field. Iman believes that ”graphic designer career path varies, and different routes can be the right pathway to the graphic design career. Your route depends on your target, would you like to be professional in logo design, branding, web design, packaging design, book and magazine design, some of them, all of them or even more!” Further she discussed major steps that a graphic designer needs to take to start their journey. Step 1: Learn graphic designing Iman recommends learning graphic design in a school which is specialized in graphic designing. She says, “learning applications is not enough, applications are only tools that will help you to finish your work in a smart and easy way.  
But without design fundamentals and concrete foundations, a good design will not be achieved.” Step 2: Get inspired by others “Don't Reinvent The Wheel, learn from people with experience to save time,” says Iman. She suggests, “to watch others’ work, about the latest styles of design which will inspire and be another source of knowledge to learn from.” Step 3: Practice, practice, practice! Practice makes one perfect! Iman advises to “create more than one idea for a design and find every possible way to create a design that forms the message you need to communicate. You must sketch, re-sketch, refine your sketch, implement it and edit it, for an exclusively perfect design.” Step 4: Read more books Iman adds, “read more and more books about graphic design theory, history, elements of design, color theories and other books about designing. These books have in depth knowledge and a rich experience to develop designing skills.” Step 5: Selecting the right tool for designing Iman emphasises on being smart in selecting the tool for designing. She says, “you need to be aware, which application will help you in what task. For instance, one can create logos using Adobe Illustrator, then edit photos in Adobe Photoshop, and finally collect design elements smartly in Adobe InDesign.” On graphic designing future and what to expect next When it comes to the future of graphic design, the big thing on everyone’s mind is animation and VR. Digital media is rapidly becoming the future of graphic design. For more we asked Iman on what to expect next. She explains, “All the fields such as print media, web designs, animation and VFX are ways to form a message using visual communications’ tools, and send it to the target audience, and it depends on the field to field, there is a proper way to use, based on the purpose. Whereas in the design industry, we cannot predict what will happen tomorrow, but for sure, the competition in using visual communication tools, will always remain a major factor in the improvement of all design fields.” Get Iman’s book, Mastering Adobe InDesign 2020 today to start exploring the InDesign workspace, the different menus, and functions, along with gaining insights into planning and executing a design. You’ll also get hands-on with creating your first project, focusing on aspects such as working with text, images, and shapes. Author Bio Iman Ahmed is an Architect who loves Art and Design, she started practicing Graphic Design in 1999, and during her 20 years of experience as a graphic designer, she created a lot of designs and magazines using Adobe Photoshop, Illustrator and Indesign. Her passion for teaching was genuine and that drives her to work hard to be a special kind of trainers. She started her teaching career in 2004, she is an Adobe user since 1999 and an Adobe Certified Instructor since 2008. Iman is a CompTIA Certified Technical Trainer since 2008, and she had been interviewed by CompTIA in November 2016 as a model of a special kind of trainers, who studied and applied CompTIA ( CTT+) program in a special way that successfully polished her skills and teaching style. Iman is a classroom Instructor and an online trainer who delivers courses in the Middle East and UK. Following Capital One data breach, GitHub gets sued and AWS security questioned by a U.S. Senator British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach US Customs and Border Protection reveal data breach that exposed thousands of traveler photos and license plate images


Luis Weir explains how APIs can power business growth [Interview]

Packt Editorial Staff
06 Jan 2020
10 min read
API management is a discipline that has evolved to deliver the processes and tools required to discover, design, implement, use, or operate enterprise-grade APIs. The discipline bridges two distinct communities and deserves the attention of both: developers who build APIs, and business and IT leaders looking at APIs to drive growth. In Enterprise API Management, Luis Weir shows how to define the right architecture, implement the right patterns, and define the right organization model for business-driven APIs. The book explores architectural decisions, implementation patterns, and management practices for successful enterprise APIs. It also gives clear, actionable advice on choosing and executing the right API strategy in your enterprise. Let's see what Luis has to say about API management and key principles to improve API design for enterprise organizations.

What API management involves

What does API management mean and involve?

Luis Weir: In simple terms, it's the discipline that aligns tools with processes and people in order to realize the value from implementing enterprise-grade APIs throughout their full life cycle. By enterprise-grade, I mean APIs that comply with a minimum set of quality standards, not just in the actual API itself (e.g. use of normalized semantics, well-documented interfaces, and good user experience), but also in the engineering processes behind their delivery (e.g. CI/CD pipelines and robust automation at all levels, different levels of testing, and so on).

Guiding principles for API design

What are some guiding principles that can improve API design?

LW: First and foremost is the identification of APIs themselves. It's not just about building an API for the sake of it and hoping value will just come. Without adopting a process (e.g. ideation) that can help identify APIs that can truly add value, there is a real risk that an API might just end up being DOA (dead on arrival), as there might not even be a need for it.

Assuming such a process has taken place and APIs that have real potential to add value have been identified, the next step is to conceptualize a design. It is at this point that disciplines such as domain-driven design can help produce a design in a way that both business and IT people can relate to. This design should capture things such as consuming applications, producing applications (data sources), data entities, and services involved in the concept. It should clearly and simply define the relationship between the components and define boundaries (bounded contexts), as these will be key not just in the actual implementation of the API or APIs (as it may end up being more than just one), but also in the creation of the API specifications themselves through IDLs (e.g. an OAS file, API Blueprint, GraphQL schema, or .proto file in gRPC, to name a few).

The next and very important step for producing a good API is to follow an API design-first process. This process ensures that the API specifications and API mocks (produced from the API specifications themselves) undergo a series of validations by potential consumers of the API as well as other relevant parties. The idea is to obtain as much feedback as possible through multiple iterations (or feedback loops) to ensure that the API is fit for purpose but also delivers a good user experience. For more details, please refer to the API life cycle section in my book.

Testing APIs

What are different API testing approaches?
LW: At the very minimum, API testing should involve the following testing approaches: interface testing, functional testing, performance testing, and security testing. Interface testing is used to validate that an API implementation conforms to the API specification. Functional testing is used to validate that the API delivers the functionality that it is meant to deliver, with the expected behavior. Performance testing ensures that APIs can actually handle the expected volume and scale as required. Security testing ensures that the API is not vulnerable to common threats such as those described in the OWASP Top 10 project.

Other, more sophisticated testing approaches may include A/B testing and chaos testing. A/B testing dynamically tests new API features against a subset of the API audience in a running environment (even production). Chaos testing (e.g. randomly shutting down components of the solution in production to ensure the API is resilient) should be considered as the API initiative matures.

Understanding API gateways

What are the key features of an API gateway?

LW: There are many capabilities expected of an API gateway, and these are all well described in the API exposure section in my book. However, in addition to such capabilities, which in my view are all essential, there are some key features that set modern API gateways (3rd generation) apart from more traditional ones (1st and 2nd gen). These are:

Lightweight: requires minimal disk space, CPU, and RAM to run.
Hybrid: can run on-premise, on cloud, and on multiple cloud platforms (e.g. AWS, Azure, Google, Oracle, etc.).
Kubernetes ready: k8s has become the most popular runtime platform for microservices. Modern APIs should be easily deployed into the K8s runtime and support many of the patterns described in my book.
Common control plane: if the management of APIs deployed on gateways isn't centralized in some way, shape, or form, then allowing enterprise users to discover and (re)use already built (or being built) APIs will be extremely difficult and will lead to a lot of duplication. We've already seen this in the SOA days. Modern API gateways should, therefore, be pluggable into control planes that take care of things like API lifecycle management and gateway infrastructure management.
Phone-home: this is a key feature, and one that still not many modern gateways support. The ability for an API gateway to establish communication with the management tier via the control plane (phone-home) using standard ports is key in hybrid architectures to avoid networking and other security constraints.

Enterprise API Management, I think, provides a pretty comprehensive overview of what modern API platforms look like and how to differentiate them from more traditional ones.

Common mistakes in API management

What are the common mistakes people make in API management?

LW: Throughout my time as an API strategist and practitioner I've seen many mistakes, and also made some myself. The important thing is being able to recognize what they are and learn from them. The top 3 that come to my mind:

1. Thinking that API management is just about implementing a product or tools, without having business and customer value at the epicentre of the API strategy. (Sometimes there even isn't an API strategy.) This is perhaps the most common one, and one that happened a lot in the old SOA days; unfortunately it still occurs in the modern API-led era.
My book, Enterprise API Management, can be used as a guideline on how to make an API management initiative less about tools, and more about business/customer value, people, and processes.

2. Thinking that all APIs are the same and therefore treating them all the same way. In some cases this just happens accidentally; in other cases it happens to avoid 'layering' APIs because 'microservices architectures and practitioners say so'. The fact of the matter is that an API that is built specifically in support of a given mobile application will be less generic and less suited for use outside of the 'context' in which it was built, compared to an API that was built without any specific consuming application in mind (and thus is not coupled to any application lifecycle).

3. Adopting the wrong organizational model to provide API capabilities across the enterprise. For example, this could be a model that centralizes all API efforts and capabilities, thus becoming a bottleneck and eventually becoming slow (aka traditional IT). Modern API initiatives should think about adopting platform models with self-service at the epicentre.

In addition to the above 3, there are many common pitfalls when it comes to API architecture and design. However, to cover these I strongly recommend my talk on the 7 deadly sins of API design: https://www.youtube.com/watch?v=Sx2_etbb9JA

API management and DevOps

What are your thoughts about 3rd generation API management having a huge impact on DevOps?

LW: Succeeding in modern API management and microservices architectures requires changes beyond technology; it also requires diving deep into the organization and its culture. It means moving away from traditional project-based deliveries, wherein teams assemble just for the duration of a project and hand over the delivered software (e.g. an API and related services) to different support teams. Instead, move towards a product-based organization, wherein teams are assembled around business capabilities and retain accountability and ownership through the entire life cycle of the product. This fundamental change of approach in delivering software means that there is no longer a split between development and operations teams, as a product team has full ownership and accountability over its product.

With that said, in order to avoid (re)building these product teams and maintaining core IT capabilities from scratch (e.g. API platforms and service runtimes), a platform operating model can be adopted. This model can offer common IT capabilities, although in a decentralized, on-demand, and self-service way. And for me, accomplishing the above is true DevOps. It is at this point that organizations can become more agile and can truly improve their time to market.

What were your goals and objectives in this book, and how well do you feel you achieved them?

LW: When I started defining and implementing API and microservices strategies in large enterprises (many of them Fortune 500), although there was plenty of content around to get inspiration from (much of it referenced in my book), I had to literally go through several articles, books, videos, and other resources in order to conceive a top-down, business-led approach towards delivering end-to-end API and microservices strategies. When I say end to end, it doesn't mean just defining PowerPoints and lengthy Word documents explaining how to deliver API/microservices strategies and then just walking away.
Or worse, sitting on the side with an opinion but no accountability (unfortunately only too common in the consulting world: lots of senior consultants with strong opinions but little or no real practical knowledge and experience). Rather, it means walking the talk: defining the strategy and also delivering it with all of its implications. With this book, I therefore wanted to share with the community an approach that I created, evolved through the years, and have seen working. It's not just theory, but a mix of theory and practice. It's not just ideas, but ideas that I have put into practice. This book is about sharing my real-life experiences and approach to delivering API and microservices strategies.

I felt that there is great stuff out there focused on specific parts of the "end to end" but not the actual "end to end," which is what I wanted to cover in this book. I didn't want to be too high level or too detailed. I wanted to give something to multiple audiences, as it requires multiple audiences (technical and non-technical) working together in order to successfully deliver API management. Ultimately, the readers will be the judge, but I think (or hope) that I have accomplished my goals with this book.

Find Enterprise API Management on the Packt store. Read the first chapter for free on Packt's subscription platform.
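Circling back to the interface and functional testing approaches Luis outlines above, here is a small, hedged sketch (not from the book) of what such checks might look like in Python with pytest and requests; the base URL, endpoints, and fields are invented for illustration.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_interface_conforms_to_spec():
    # Interface test: the response carries the fields the API specification promises.
    response = requests.get(f"{BASE_URL}/customers/42", timeout=5)
    assert response.status_code == 200
    assert set(response.json()) >= {"id", "name", "status"}

def test_functional_create_then_read():
    # Functional test: creating a resource and reading it back behaves as expected.
    created = requests.post(
        f"{BASE_URL}/customers",
        json={"name": "Test Co", "status": "active"},
        timeout=5,
    )
    assert created.status_code == 201
    customer_id = created.json()["id"]
    fetched = requests.get(f"{BASE_URL}/customers/{customer_id}", timeout=5)
    assert fetched.json()["name"] == "Test Co"
```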


Gabriel Baptista on how to build high-performance software architecture systems with C# and .Net Core

Vincy Davis
11 Dec 2019
10 min read
A software architecture refers to the fundamental structure of a software system that serves as a blueprint to manage the system complexity. It is also used to maintain a coordination mechanism among the various components of the software. One of the popular combinations of tools that are used for building sustainable software architecture solutions are the general-purpose C# programming language and the open-source .NET Core computer software framework. This year, C# and .Net Core brought in some exciting features to help developers design a high-performance software system. To understand how C# and .Net Core aid in building software architecture systems, we interviewed Gabriel Baptista, one of the authors of the book ‘Hands-On Software Architecture with C# 8 and .NET Core 3’. Gabriel is a Software Architect, a specialist in Azure PaaS solutions and also the co-founder of a startup for developing mobile applications. According to Gabriel, the new features in C# 8 like async streams and nullable reference types are good to detect errors quickly and maintain the high quality of code programming respectively. When asked about the comparison between Visual Studio Code and Visual Studio for C# development, Gabriel insists that the productivity offered by Visual Studio is the best choice for C#. He is also of the opinion that Microsoft developed C# has a better roadmap than Java. On the applications of a microservice architecture and how .Net and C# enable code reusability In your book, ‘Hands-On Software Architecture with C# 8 and .NET Core 3’, you have demonstrated how microservice architecture can be applied to an enterprise application like microservice logging. Apart from the use cases in your book, what other applications can microservice architecture be used for? Microservices are being applied in a bunch of scenarios, due to the facilities they bring, like enabling different programming languages in different teams for the same enterprise App. Transversal aspects of software, like the Logging that we have as an example of the book, and Security, are quite simple to think about as microservices. However, the complexity increases when you think about functional requirements, like Customer Management, Logistics, or Inventory, this is a bit confusing. There is where Domain-Driven Design will help you with, since DDD is about the construction of a unique domain model, keeping the views as separate models. This is helpful because you will be able to create a domain characterized by the language spoken by the experts, that is what we call the Bounded Context Principle of DDD. Now, think about each of these domains as a microservice. This will surely facilitate your understanding of how to organize them. You can read Chapter 5 of my book to know how to apply a microservice architecture to your enterprise application. You also say in your book that code reusability is one of the most important features in Software Architecture. How does the .NET standard help in managing and maintaining a reusable library? Also, how does C# enable code reuse? Code reuse is for sure what differs the velocity of development between two great companies. The one that reuses more certainly is faster and more profit. .NET enables you to reuse code from many platforms by defining the .NET standard as the core of a class library. With .NET Standard, you can write a class library that runs in Windows, Linux and Android, for a Desktop App, a Mobile App, and Azure Function and a Web App! This is amazing! 
Besides, .NET itself has many opportunities for code reuse by giving us a dozen of already done classes due to its framework. To finish is good to remember that C# is an Object-Oriented Programming Language, which enables the principles of Abstraction, Polymorphism, Inheritance, and, Encapsulation, that are really useful for code reuse. Check out Chapter 11 of my book to learn how to create reusable libraries. One of the main tasks for a developer is to choose a suitable architecture that will provide the desired functionality to the software. With the many varieties of software architectural patterns available today, how should a user approach them and choose the best one? What aspects should they look at when comparing software architectures? When you need to choose a suitable architecture for a system, my first recommendation is to start the process with a specific goal – keep it simple. The more complex your architecture, the worse the path you are going to. If you stop and think a bit about the most complex solutions we have nowadays, you will find something in common and interesting in all of them. They are made by many small simpler parts. Thanks to the cloud and the bunch of APIs we have nowadays, you can design really simple solutions focused on your business. Gabriel’s views on the latest advancements in C# 8 In its latest release, C# 8 brings features like async streams, nullable reference types, and new indices/ranges. What were you most excited about in this release and why? How do you think C# 8 will help in improving the overall quality of the delivered software? I am almost sure that NullReferenceException is one of the main reasons why C# Apps crash. Then, when it comes to improving quality, for sure nullable reference types will help a lot since null reference exceptions are not detected in compilation time. With this feature, you will be able to get the errors at this point and the theory of software development says that the earlier you get a bug solved, the better and cheaper. Next, I believe that async programming is amazing to make your apps work more seamlessly since it mimics the behavior of classical synchronous code while keeping most of the performance advantages of general parallel programming. For this reason, async streams will be a good opportunity delivered, since we will be able to get the advantages of async programming in foreach loops, enabling a push-programming in this kind of loop. For instance, we will be able to program an asynchronous data pull that will not block the client. Entity Framework Core 3.0 and Entity Framework 6.3 are now generally available with C# 8. How do you think EF Core 3.0 and EF 6.3 can take advantage of the new features in C# 8? Well, the two features that I mostly enjoyed are the ones that EF Core and EF 6.3 have implemented too: nullable reference types and async streams. Reducing bugs for not having null type reference is always good! The possibilities given by async streams together with EF Core are great. So, with them, EF will be even more powerful. Another feature that it is good to know is that now they support the connection to Cosmos DB.  Read Chapter 6 of my book to understand the interactions of data in C# using Entity Framework Core. In your opinion, is C# a better programming language than Java? Which language do you think has a better future, C# or Java? As a software architect, you need to understand that the programming languages evolve. 
In other words, the programming language itself is not the most important part, whereas the fundamentals are the essence of the process of building systems. Considering this approach, I cannot say that one language is better than the other. The best programming language is the one that will give you the best result in the fastest time with the team you are working with. What I could say about C# and Java is that both were, are, and are going to be incredibly important to the evolution of humanity. Right now, I consider that the C# has a better roadmap than Java. The reason why I believe it is that Microsoft is always ahead of other companies when it comes to productivity. On why Visual Studio is the best option for C# development Why do most C# developers prefer Visual Studio? Can you elaborate on how VSCode differs from the other source code editors? How difficult is it to develop C# applications using Visual Studio Code? To me, Visual Studio is the most powerful development environment we have for programming nowadays. You can write code on so many platforms and for so many different solutions with incredible debugging environment, connectivity to the cloud and facility to manage your code whatever Version Control System you decide to use. With Visual Studio you have the opportunity to start any project related to C# and even more, it gives you the possibility to debug your different projects in many ways. For instance, debugging Threads or Windows Services is not easy, but with VS we find different ways to do so, which at the end causes an acceleration of development. The best answer that I always give to someone who asks me why Visual Studio is productivity. I really don’t think C# developers prefer Visual Studio Code. VS Code is really useful if you are running a different OS than Windows or if your writing code in other programming languages like NodeJS. However, when it comes to C# development, for sure Visual Studio is more powerful. Gabriel on learning curves and best practices for beginners You are a Software Architect with experience working in diverse projects for retail and industry. How much does the role differ between industries and sectors? How does the learning curve look like for beginners to become an expert in building enterprise applications with the .NET Stack? The role itself does not change due to the different sectors. Time-to-market, performance, security, reliability, and quality are requirements that will be asked for any customer you have, no matter the size they are, no matter the sector they work for. The learning curve starts by understanding the principles of .NET and C#, that means, the Object-Oriented Principles. Any developer needs to understand the process of creating software and software engineering will give them this background. To finish, I am totally sure that a person who wants to be in the development world of the 21st century needs to understand Cloud Computing, especially PaaS – Platform as a Service. And in this world, Azure is the best one for giving the results the sectors need. Can you suggest some best practices that every developer should follow for a safe and maintainable code in C#? Yes, developers should be vigilant about the following: Never leave a catch statement blank. Do not write big methods. Methods need to have a single responsibility. Every time you are not sure if there is an already done class for the code you are working to, first try to find it. Chances are that you already have this done. 
No matter the number of developers you have in your team, even if your team is only you, do write code the simpler you can. Threads are great if you really know what you’re doing. So before implementing them, study the topic a lot. If you want to develop highly scalable enterprise-ready apps that meet customers’ business needs, read Gabriel’s book ‘Hands-On Software Architecture with C# 8 and .NET Core 3’. This software architecture book will give you a hands-on approach to learn various architectural methods that will help you deliver high-quality products. About the Author Gabriel Baptista is a Software Architect in the R&D department of Toledo do Brasil. He leads a team who delivers weighting solutions software to retail and industry customers. Gabriel is a specialist in Azure PaaS solutions. He is also a Professor at Salvador Arena Foundation Educational Center in their Computing Engineering College Course, where he is responsible for the disciplines of Programming Language and Software Architecture. You can find him on Linkedin. You can now use WebAssembly from .NET with Wasmtime! Exploring .Net Core 3.0 components with Mark J. Price, a Microsoft specialist Microsoft announces .NET Jupyter Notebooks .NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3 Introducing ‘ixy’, a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others


Clean Coding in Python with Mariano Anaya

Expert Network
27 Jul 2021
7 min read
Key takeaways

Clean code isn't just a nice thing to have or a luxury in software projects; it's a necessity. If we want our project to successfully deliver features constantly at a steady and predictable pace, then having a good and maintainable code base is a must.
The true nature of clean code relies on the fact that other practitioners should be able to read and maintain the code.
Read the book Clean Code in Python, Second Edition, to know all about idiomatic Python, see the difference between good and bad code, and identify traits of good code and good architecture.

There is no sole or strict definition of clean code. Moreover, there is probably no way of formally measuring clean code, so you cannot run a tool on a repository that will tell you how good, bad, or maintainable that code is. Sure, you can run tools such as checkers, linters, static analyzers, and so on, and those tools are of much help. They are necessary, but not sufficient. Clean code is not something a machine or script can recognize (so far) but rather something that professionals can decide. We interviewed Python expert and bestselling author Mariano Anaya on clean coding, the importance of efficient code formatting, and his recent book, Clean Code in Python, 2nd Edition.

The interview in detail:

1. To what extent can the inability to write efficient code harm/affect an organization/software?

In my experience, inefficient code can be so dangerous as to paralyze entire projects. I've seen services that needed to be rewritten because of how unmaintainable they were. At some point, it became impossible to keep on making changes to that API, and the issues kept piling up, so it needed to be replaced by a brand-new system. On another occasion, there was an application we knew had problems because of the way it was written, and its instability was causing frustration in customers, which permeated into the company. The buggy nature of the application wasn't separate from the way it was written; rather, it was the consequence. Customers were complaining about quality, and this shows to what degree technical debt can harm an organization. I've seen this pattern several times, when the company must make the hard decision of stopping the release of new features to fix errors in the software. I'd say that technical debt, if left untreated, can lead to very harmful results for a company.

2. What should developers keep in mind while starting out with legacy systems?

First, identify the degree of technical debt accrued. There are good software projects that have been designed correctly and whose technical debt is relatively low (perhaps it's just about updating some libraries to newer versions, or moving parts of the code towards new features that weren't available at the time it was originally written). In the event of having a lot of technical debt, it's important to understand which is the most critical part that needs to be fixed. There's certainly a part of the code, a module, or a functionality that's responsible for most of the complaints from customers, and that's what needs to be refactored most urgently. It's critical in this sense to do a proper analysis and have a plan for the improvements to make in the code, rather than jumping straight to the code and starting to refactor. This will help give a clearer idea of what needs to be changed and the degree of refactoring needed; that is, whether we'll be fixing the code or the situation requires a full rewrite.
Generally, completely rewriting the application should be a last-resort kind of decision, although there are some obvious cases (for example, if the application was written with Python 2, then it's clear that all the code will need to be changed).

3. What are the future advancements that you anticipate in Python?

It's hard to know for sure what will happen with Python in years to come, but it's interesting to see that, in a similar way to how Python took inspiration and features from other languages, it's now inspiring modern languages as well, while also catching up with new features and programming models. Such was the case with all the improvements made for asynchronous programming, incorporated into the standard library. I believe the asynchronous programming capabilities will continue to be enhanced in future releases. I have also noticed some efforts towards making Python more efficient, whether that means a more lightweight interpreter achieved by reducing the number of packages in the standard library, or attempts to solve the GIL problem. These are the kinds of improvements I'm most hopeful to see.

4. What are some of the popular myths around writing clean code?

Perhaps the most common misconception is that clean code is about formatting code, or maybe even about PEP 8. In fact, relating technical debt only to code issues is another popular myth. Technical debt is also about technology: being caught up with dependencies. Being able to update your dependencies quickly in case there's a security issue is also a concern related to technical debt, and therefore to clean code. Things like the speed of iteration, how fast and frequently deployments can be made, and the adaptability of the architecture play an important role in the success of the project.

5. Tell us about your book, Clean Code in Python, Second Edition. What trajectory does your book follow to help its readers develop maintainable and efficient code?

The first chapter starts with an introduction to the importance of having a well-structured code base, presenting a framework for the chapters to come. This is supported by tools and recommendations on how to set up a project for success, considering automated tools that will help us format the code, and setting up a pipeline to effectively deploy our code with good quality gates (controls, tests, different stages). Then, the book introduces some Python-specific concepts, placing strong emphasis on the particularities of Python's syntax and a more succinct way of writing code, taking advantage of the features the language has to offer. There are some chapters that revisit general design ideas from software engineering, like object-oriented design and design patterns. From that point, the chapters explore topics of software engineering in terms of how they can be implemented in Python, using the particularities of the language itself. The idea of the book is to provide readers with the tools and concepts to understand what clean code means beyond any definitions given. It's a pragmatic book, oriented towards a practitioner's audience, so it places special focus on how to get things done in an effective way, which often means accepting tradeoffs.

6. Does your book provide hands-on scenarios to practice the techniques it teaches?

Absolutely! The book has a very pragmatic, hands-on approach. As each idea is introduced, it's followed by examples that demonstrate how that implementation would work.
Moreover, I’ve put special effort into making the examples as realistic as possible. Considering that the examples need to showcase an idea irrespective of superfluous details (that is, leaving out everything that’s not relevant to the explanation being made, and isolating the problem at hand), they’re still real-world scenarios, pieces of code any reader can relate to their daily job. There’re no made-up examples like Fibonacci-series or things anyone wouldn’t normally find on real code. Extrapolating from the examples, readers can use the code as reference to solve their problems.  To practice more, there’s a Github repository where all the code from the book lies, and it’s constantly updated. There’s also a Docker image for the entire setup of the book, with the environment already configured, that readers can use to test the code, and learn by modifying it.  About Mariano Anaya is a software engineer who spends most of his time creating software with Python and mentoring fellow programmers. Mariano's main areas of interests besides Python are software architecture, functional programming, distributed systems, and speaking at conferences. He was a speaker at Euro Python 2016 and 2017. To know more about him, you can refer to his GitHub account with the username rmariano. His speakerdeck username is rmariano.

Why is Go the go-to language for cloud native development? - An interview with Mina Andrawos

Aaron Lazar
30 Mar 2018
16 min read
Golang is currently one of the fastest growing programming languages in the software industry, finding its way into almost every nook and cranny of application development. Its speed, simplicity and reliability make it the perfect choice for all kinds of developers. We recently interviewed Mina Andrawos, an experienced Go engineer and the author of the book, Cloud Native programming with Golang.  Mina explains why Go is being rapidly adopted in various development areas and by leading projects like Docker and Ethereum, how it is evolving as a language and what makes it great for cloud development. He shares expert insights into Go’s adoption for mobile development, embedded systems and the serverless web. He has also thrown light on the new directions of cloud computing and how Go makes development a piece of cake. Author’s Bio Mina Andrawos is an experienced engineer who has developed deep expertise in Go by using it personally and professionally. He has written numerous Go applications with varying degrees of complexity. Other than Go, he has skills in Java, C#, Python, and C++. He has worked with various databases and software architectures and is skilled with the agile methodology for software development. Besides software development, he has working experience of scrum mastering, sales engineering, and software product management. Key Takeaways The 3 most notable features of Go are its concurrency model that sets it apart from mainstream languages, the fairly mature standard package which covers a wide range of use cases and its ease of deployment. Go is designed to be simple and intuitive, yet reliable and robust for application development. There are currently several mature tools to write Go programs, like VSCode, Vim, Atom or Sublime text. Mina’s book Cloud native programming with Golang helps you build production level cloud native microservices and covers a wide range of important topics in the space such as types of message queues, docker containers, how to monitor microservices, perform continuous integration and much more. Go can be viewed as a hybrid between mainstream statically typed languages like Java, and popular dynamic scripting languages like Javascript. Go was built with the goal of being fully cross-platform in mind, and it can work in smaller mobile processors like ARM. Full Interview Go is one of the most popular and fast growing programming languages. What according to you, are the 3 notable features of Go? Go is a very remarkable programming language. Numerous articles were written about the advantages of the language. Trying to gather notable features in Go can actually produce enough material to fill a number of white papers. However, having said that, let’s try to squeeze three out of them: 1. Concurrency: Go’s unique concurrency features are legendary. The language offers a concurrency model that stands apart from most mainstream programming languages. Go advocates a different way of thinking about concurrency problems in modern software. In one of the articles I wrote, I have described what concurrency means in the Go language. 2.  The Standard package: Go has the advantage of being coupled with a fairly mature standard package, which covers tons of key features for building modern software. This means that once you install Go, you can build production level software that can cover a wide range of use cases from Restful web APIs to encryption software, before needing to consider any third party packages. 3.  
You have been developing software for quite some time now. What tools do you use on a day-to-day basis?

Programming is a very fun craft, and the tools we use in our development are integral to making the environment enjoyable. For me, because I work with multiple programming languages, I use different tools based on the project. My current tool of choice for the Go programming language is VSCode, combined with its Go plugin by lukehoban. This is just my preference however. There are lots of other tools that could be used to write Go programs. Some developers prefer Vim with all its popular features, while others prefer Atom or Sublime Text. There is also a Go plugin for the IntelliJ IDE, which I had used in the past and really liked.

What kind of learning plan would you suggest to web developers who are interested in using Go as their main development tool to build Cloud Native Applications? What aspects do you feel are tricky to get past?

The plan would include three steps:

1. Get comfortable with Go.
2. Learn the design patterns, the software tools, and the technologies of cloud native applications.
3. Get familiar with a cloud service provider (like AWS, Azure, or Google Cloud).

Go is designed from the ground up to be simple and intuitive. This makes learning Go a better and more straightforward experience compared to many other languages. For developers new to Go, one of the best resources to start learning Go is the Go tour.

Once the developer is familiar with Go, they are ready to move to the next step of the learning plan, which is to learn the design patterns of cloud native applications, as well as the software technologies needed to build and deploy such applications. A good way to start is to check out my newly published book: Cloud Native programming with Golang. One major advantage of the book is that it not only covers the technologies and design patterns associated with cloud native applications, but it also connects these technologies and design patterns with Go, which makes it an excellent resource for Go developers looking to build cloud native software. This, in my opinion, is the trickiest aspect a software professional needs to get past to acquire the necessary skills to build cloud native applications.

For the third step, the execution will depend on the cloud service provider that you or your business would like to work with. Some enterprises like to utilize their own private clouds, while others are tied to a mainstream cloud provider due to existing contracts or executive preferences. For AWS, my book should provide enough insights into how to write Go cloud native applications that are capable of making use of the cloud platform.

In the context of all the above, how does your book, Cloud Native programming with Golang, prepare its readers to be industry ready? What are the key takeaways for readers from your title and how does your book help with the learning curve?
The book was the product of a great amount of research, sleepless nights, and focused effort. I am a coauthor of the book with Martin Helmich, whom I enjoyed working with immensely. The book was designed from the get-go to expose the reader to the practical experience needed to build production level cloud native microservices in Go, with the least amount of fat. It takes the reader on an expanding learning journey, which starts from the ten-thousand-foot view of cloud native microservices, then dives deep down into all the different aspects that need to work together in harmony in order to produce production level cloud native applications.

It will prepare you to be industry ready by covering a wide array of topics that are vital in a production environment. Examples include: different types of message queues found in production environments, Docker containers, monitoring microservices via Prometheus, continuous integration, RESTful API design, security and authentication, AWS Go APIs, NoSQL databases, ReactJS, and more. What makes it special is that it doesn't shy away from covering sophisticated and diverse topics from scratch. For example, if you look at the RESTful API chapter, we don't assume that you already have knowledge of the HTTP protocol or web services design. Instead, we build the concepts with you from point zero up. Another example is our message queues chapter: you can start reading the chapter knowing nothing about message queues, and then finish it with more than enough knowledge to be very effective in utilizing message queues in your applications.

The book is perfect for readers who want to begin learning how to build cloud native microservice applications. It will carry the reader from a beginner level to a point where they become capable of tackling advanced tools and design patterns in that space.

You've been working with several other languages like Java, C++, C# and Python. How does Go compare to the other languages you've worked with?

Go, in my opinion, could be viewed as a hybrid between mainstream statically typed languages (like Java) and popular dynamic scripting languages (like JavaScript). That is because Go doesn't require the same level of verbosity that you would need in a Java program. However, it's still a bit more verbose than an equivalent JavaScript or Python implementation. Luckily, Go makes up for this extra verbosity compared to dynamic languages by delivering software that is much faster than the equivalent Python or JavaScript implementation.

One very hotly debated feature that is missing in Go is generics. Some people in the community believe it's a good thing Go doesn't have generics, while others can't wait till Go maintainers are convinced that generics need to be added. From my personal experience, I have come across situations where it would have been nice to have generics; however, it never got to the point where I couldn't complete the task at hand. Having said that, there are some situations where you can argue that a piece of Go code might be a bit more verbose than an equivalent piece of Java code that makes use of generics.

As mentioned earlier, Go's concurrency model is different from almost all mainstream programming languages. Once you master the building blocks of Go's concurrency model (namely, Go channels and goroutines), you can build very powerful concurrent software with relative ease.
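As a small, hypothetical illustration of those building blocks (this sketch is ours, not taken from the book), a few goroutines can fan their results back over a single channel:

```go
// A minimal sketch of goroutines and channels: several workers run
// concurrently and send their results back over one channel.
package main

import (
	"fmt"
	"time"
)

// worker simulates a unit of work and reports its result on the channel.
func worker(id int, results chan<- string) {
	time.Sleep(time.Duration(id) * 50 * time.Millisecond) // pretend to do some work
	results <- fmt.Sprintf("worker %d finished", id)
}

func main() {
	results := make(chan string)

	// Each "go" statement starts a new goroutine running concurrently.
	for i := 1; i <= 3; i++ {
		go worker(i, results)
	}

	// Receive exactly one message per worker; a receive blocks until a value arrives.
	for i := 0; i < 3; i++ {
		fmt.Println(<-results)
	}
}
```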
I always find writing concurrent software in Go to be a much smoother experience for me than writing concurrent software in other languages. Another mention from earlier was the ease of deployment. I never tire of enjoying how easy it is to deploy my Go programs to production compared to other languages.

One last notable mention is the tooling. Since Go is a relatively new programming language, the tooling is not yet as fancy as what's available for older languages like C# or Java, for example. However, having said that, the Go ecosystem is maturing nicely every day, and we have more than enough tools right now to build fairly sophisticated software in Go. There is no better proof of this than the rise of advanced software projects written in Go, like Docker and Ethereum.

You've worked with JavaScript as well. What's your take on using Go for full-stack web development / isomorphic web development, over JS?

That is a very interesting question. For people not familiar with the term 'isomorphic web development', it basically means using the same programming language for most of the front-end and the back-end components of the web application (combined with CSS or LESS or some other front-end styling technology). There is an important distinction to make between 'isomorphic web development' and 'full-stack web development'. You can be a full-stack web developer while using JavaScript for the front-end in addition to another language like Go or Ruby for the backend. However, if you are building an 'isomorphic' web application, the idea is that you make use of one language for almost all your code, whether it's on the front-end or the backend.

I think Go enjoys being in the sweet spot where simplicity meets performance. That is because Go comes included with out-of-the-box packages that make web development relatively smooth, not to mention a growing third party ecosystem that complements the standard package and further facilitates writing web applications in the Go programming language. Having said that, JavaScript was built initially for the sole purpose of front-end web pages, but then grew in scope after the Node.js project came into existence, which made JavaScript a more than capable backend language as well.

So for the sake of being neutral and impartial, I would like to cover some advantages and disadvantages of using Go for web development vs JavaScript. Let's start with the disadvantages of using Go for web development compared to JavaScript: JavaScript is a language that can natively be used in both the front-end and the back-end components of web applications, and this will always be an advantage of JavaScript over any other programming language when it comes to web development. However, in the case of Go, this disadvantage is countered to some extent by the existence of GopherJS. GopherJS converts Go code to JavaScript code. This means that you can write front-end code in Go, then have it converted to JavaScript in order to work in the browser, which will get you very close to the isomorphic web development experience you obtain from using JavaScript on the frontend combined with Node.js on the backend. GopherJS is a very popular project, with more than 6000 stars on GitHub. People use it and it delivers them results.
Having said that, the disadvantage of GopherJS is that it's not native: since it converts your Go code to JavaScript code, when tricky issues happen you may need to troubleshoot the auto-generated JavaScript code, which is not always a fun experience, especially if your reason for using GopherJS is to avoid JavaScript in the first place. Your experience will vary based on your projects and the goals you are trying to achieve.

Where do you see the future of Go's development going? What changes or improvements can the community expect in future releases?

Go is growing in popularity every day. I see an immensely positive outlook for the future of development in Go. I think the sky's the limit. Go currently powers some of the most exciting projects in the industry, like Docker, Kubernetes, and Ethereum, among many others. Not only that, but Go has also become integral to the operations of major players in the software industry, like Google and Uber, among many others as well. All of this richness of the user base provides Go unprecedented opportunities for growth and adoption. Engineers and maintainers who have experienced Go first-hand tend to use it in their future endeavors, further enriching the ecosystem.

The language has been fairly stable and consistent for a while now, and no substantial language changes are to be expected in the near future. So if you start learning Go now, your skills will stay relevant for a long time. Most of the improvements currently getting added to Go are more related to its runtime performance as well as standard package enhancements.

Are there any interesting areas of implementation you've noticed Go finding its way into? Do you think the language would be best fit for any specific kind of development?

One interesting area for me that Go is starting to find its way into is mobile development. Since Go was built with the goal of being fully cross-platform in mind, it can work in smaller mobile processors like ARM, for example. This means that programs written in Go not only can work in server and desktop operating systems (like Linux, macOS, and Windows), but they can also function in mobile environments like Android and iOS. Having said that, it is important to mention that the ecosystem for developing Go apps on mobile devices is still young and maturing. If curious, you can check https://github.com/golang/mobile for Go's mobile tools. There is also an interesting Go framework that is still in early development but looks extremely promising as a tool to write mobile applications in Go; you can find it here: https://gomatcha.io

Regarding best fit use cases for the language, I see Go as a powerhouse for backend software development, especially the kind of modern backend that relies on microservices and distributed architectures. The power that Go gives you in the world of the server backend is indisputable.

Can you give developers 3 reasons why they should pick up your book?

This book, Cloud Native programming with Golang, covers a diverse set of practical topics from scratch that can help the reader build production level cloud native microservices. We did a lot of research to put all these topics together. I honestly doubt you would find another resource that would cover all those topics in one place. Examples of topics covered are: RESTful APIs, secure microservices, message queues (Kafka, RabbitMQ, and AWS SQS), ReactJS, MongoDB, DynamoDB, Docker, Kubernetes, AWS, microservices monitoring with Prometheus, and continuous delivery, among others.
Additionally, it covers the topics in a logical top-down order, which solidifies the learning process. We start the journey by covering the 10,000-foot view of what a cloud native architecture looks like: the design, the thinking process, the scalability, and more. From there, we take satisfying deep dives into the different aspects of cloud native applications. Towards the end of the learning journey, we don't just leave the reader with no direction. Instead, we offer a path forward for taking their learning journey to the next level.

Amazon recently added Lambda support for Go. What's your opinion on Serverless Go? Would it go hand in hand with Cloud Native development?

It was a very exciting announcement indeed. I believe serverless support is a powerful tool in the developer's toolbox to build cloud native applications in Go. The option to include a serverless component in your application allows you to automate very focused, triggered tasks that are not supposed to run forever. This ability helps you build better cloud native applications in the long run. Microservices, on the other hand, are better suited for tasks and operations that are expected to run continuously.

If you enjoyed this interview, do head over to check out Mina's book, Cloud Native programming with Golang.

Learn Transformers for Natural Language Processing with Denis Rothman

Expert Network
31 Aug 2021
7 min read
Key takeaways

The transformer architecture has proved to be revolutionary in outperforming the classical RNN and CNN models in use today.

Artificial intelligence is simply a recent form of automation, just like all other automation. AI consultants will always be necessary to implement AI.

Understand transformers from a cognitive science perspective with the book Transformers for Natural Language Processing.

The transformer architecture is both revolutionary and disruptive, making it the hottest algorithm in AI. It is a game-changer for Natural Language Understanding (NLU), a subset of Natural Language Processing (NLP), which has become one of the pillars of artificial intelligence in a global digital economy. Transformers can outperform the classical RNN and CNN models in use today. We interviewed artificial intelligence expert Denis Rothman about transformers, their advancement in artificial intelligence and NLP, and his recent book Transformers for Natural Language Processing.

What's the significance of AI language understanding in the tech world today and what role do transformers play in it?

Artificial intelligence-driven language understanding is expanding exponentially. It has become the pillar of language modeling, chatbots, personal assistants, question answering, text summarizing, speech-to-text, sentiment analysis, machine translation, and more. The Transformer, introduced by Google, provides a novel approach to language understanding through its self-attention architecture. OpenAI offers transformer technology, and Facebook's AI Research department provides high-quality datasets. Overall, the Internet giants have made transformers available to all, as you will discover in my book.

The transformer architecture is both revolutionary and disruptive. The Transformer and subsequent transformer architectures and models are revolutionary because they changed the way we think of NLP and artificial intelligence itself. The architecture of the Transformer is not an evolution. It breaks with the past, leaving RNNs and CNNs behind. It takes us closer to seamless machine intelligence that will match human intelligence in the years to come.

What should deep learning and NLP practitioners keep in mind while starting their career with transformers?

The world of artificial intelligence is undergoing an exponential evolution in NLP due to the amount of data available. As this evolution expands to all domains, new abilities are required. NLP will not just be about downloading a model and getting to work in terms of software. You will have to analyze the quality of what a transformer model produces to fine-tune it. In turn, to analyze NLP properly, a minimum knowledge of linguistics will become mandatory. Linguistics will enable you to understand the building blocks and structure of a language. Grammar will increase your ability to analyze the output of a transformer. Otherwise, your team will have to hire a linguist, which will increase the project's cost and threaten the Return on Investment (ROI) of the team.

What are some future advancements that you anticipate in transformers and NLP?

Transformers have wiped RNNs off the map at this point. They represent the industrialization of artificial intelligence. Transformers are taking AI from the hype to an industrial level. Unlike traditional deep learning models, transformers contain optimized layers for GPUs and CPUs. In the future, creating NLP models will require machine architecture awareness.
Machine performance will be the key to more efficient models. Not everybody can purchase or rent a supercomputer to train a model. Learning how to design tailored transformer models based on optimized datasets will become mandatory to face competition.

What are some of the popular myths around transformers prevalent in the tech market?

Many people believe that transformers can perform all NLP tasks with a model such as GPT-3. Nothing could be further from the truth. Google, Microsoft, Facebook, and Amazon, for example, need data for their everyday business and powerful NLP transformer models to analyze the billions of words coming in every day. However, the tasks are limited to their marketing usage. If you need to implement a transformer in a specific area, you will have to build datasets. You will also have to build pipelines with classical algorithms and queries to process the data and the inputs, and to manage the outputs. In real life, that means that artificial intelligence is only a component in a long chain of classical algorithms and processes.

How was your experience building one of the very first word2matrix embedding solutions?

In the early 1980s, I managed a company with many students who wanted to learn a language. I had a choice: increase the number of teachers or automate vast portions of the process. I decided to go for automation. Any intelligent system requires calculations. I found that converting words and word pieces into numbers was far more efficient than directly analyzing the words. I thus created a word2vector system, patented it in 1982, wrote a textbook, and implemented it in our company. Students began to take specific courses independently in our lab without a teacher. I then went further in the next few years, writing one of the first cognitive NLP chatbots, which was successfully implemented for an industrial number of students.

Being the author of three cutting-edge AI solutions, what is your take on the shrinkage of job opportunities due to AI?

Automation began centuries ago with water mills, windmills, textile machines, locomotives, and more recently, motorized personal vehicles in the early 20th century. Tractors replaced millions of jobs in the fields. Services are no exception. In the 1950s, hundreds of thousands of tellers, actual humans, worked in banks around the world. Today everybody goes to an ATM. ATM stands for Automated Teller Machine. "Automated teller" says it all. A person performing a service was automated.

Software has been the automation of human tasks from the beginning, from accounting to stock market management and thousands of other tasks. Artificial intelligence is simply a recent form of automation, just like all other automation. AI cannot replace traditional mathematics in physics. The calculation of differential equations driving rockets and satellites requires classical software precision, not artificial intelligence. AI is only a component of automation, like when cars replaced horses and all of the jobs that went with horse-driven transportation. AI will not replace everything because AI is useless in many fields. AI consultants will always be necessary to implement AI.

Why has Python become the most suitable language for natural language processing?

It's important not to confuse the concepts of "most used" and "most suitable." Python is a great intuitive language to learn AI and NLP. But it's not a prerequisite. Python is easy to use and run, making it the shortest path, at this point, to take to learn AI. But do not be mistaken.
C++ skills will also be required in large real-life projects, for example. My advice: learn AI with Python at full speed, and do some implementations with Python, but also learn other languages such as C++, Java, and more. Real-life pipelines require classical processes and algorithms, not only AI. In some projects, C++ will boost performance, for example.

Tell us about your book, Transformers for Natural Language Processing. What trajectory does your book follow to help its readers master transformers?

Reading my book on transformers will help you save weeks and maybe months of effort trying to understand how they work by watching videos and reading blogs. The reader will begin by learning the original Transformer in depth. Once the transformer's building blocks are mastered, the reader will learn how to train and fine-tune a transformer. The reader will then build and run the main transformer models such as BERT, RoBERTa, GPT-2, T5, and more. The models will be applied to NLP tasks such as document summarization, Q&As, semantic analysis, and a wide range of other NLP tasks. The book contains a method to analyze fake news with transformers. The book also goes beyond the architecture of transformers and into the world of usage. You will learn how to build, train, fine-tune, and implement transformers.

Glen Singh on why Kali Linux is an arsenal for any cybersecurity professional [Interview]

Savia Lobo
15 Nov 2019
12 min read
Kali Linux is a familiar name to anyone involved in computer security. It is the most renowned tool for advanced penetration testing, ethical hacking and network security assessments. To get to know Kali Linux more closely, we recently had a quick chat with Glen D. Singh, a cybersecurity instructor and infosec author whose latest book is Learn Kali Linux 2019. In his book, Glen explains how Kali Linux can be used to detect vulnerabilities and secure your system by applying penetration testing techniques of varying complexity.

Talking to us about Kali Linux, Glen said that the inclusion of 300 pre-installed tools makes Kali Linux an arsenal for any cybersecurity professional. In addition to talking about certification options for both novice and experienced cybersecurity professionals, Glen also shared his favorite features from the latest Kali Linux version 2019.3, among other things, in this deeply informative discussion.

On why the cybersecurity community loves Kali Linux and what's new in Kali Linux 2019.3

What makes Kali Linux one of the most popular tools for penetration testing as well as for digital forensics?

The Kali Linux operating system has over 300 pre-installed tools for both penetration testing and digital forensics engagements, making this single operating system an arsenal for any cybersecurity professional. The developers of Kali Linux are continuously working to create rolling updates, new features and new upgrades to the existing operating system. Today, you can even deploy Kali Linux on various cloud platforms such as Microsoft Azure, Amazon AWS and DigitalOcean. This allows you to create a beast of a machine with any scale of computing resources, while allowing you access from anywhere.

Furthermore, being a Linux-based operating system is one of the best things that makes Kali Linux popular. This is because Linux is a very powerful operating system with built-in security, rolling updates, and security fixes, and it is very light on computing resources as compared to other operating systems. Kali Linux can even be installed on a Raspberry Pi, making it a custom network implant device. Finally, what I love about Kali Linux is the fact that you can create a live USB with multiple persistence stores and apply the Linux Unified Key Setup (LUKS) Encryption Nuke, providing the option to wipe the stores using a Nuke password.

What are the features that excited you in the latest Kali Linux version, 2019.3, and why? According to you, how will these additions help Kali Linux grow as a community and for individuals using it?

One feature I'm definitely excited about in Kali Linux 2019.3 is the support for the LXD container image. This feature will allow you to experience virtual machines on Kali Linux, but instead of using a hypervisor, you'll be using Linux containers. This provides some major benefits such as easy-to-scale containers, and support for networking and storage management with security.

Kali Linux 2019.3 has support for the new Raspberry Pi 4, which has an improved CPU and faster memory as compared to its predecessor. With the new upgrades to Kali Linux 2019.3, the pentesting operating system can take advantage of the 64-bit CPU on the new Raspberry Pi 4, thus maximizing the computing power in the tiny ARM device. I can definitely see cybersecurity enthusiasts having a lot more fun creating Linux-based containers in their Kali Linux 2019.3 version.
Many will be excited to purchase a credit-card-sized computer, the Raspberry Pi 4, for setting up network implants and remote access configurations that are ready to be deployed.

Glen's journey in the cybersecurity sector and a few certification recommendations for a career upgrade

Tell us about your evolution in cybersecurity.

As a teenager, I was always fascinated by computers and how technologies work together. Upon completing my secondary level education, I began to pursue my first IT certification, CompTIA A+. During this certification, I was introduced to computer security, and it caught my attention a bit more than other topics. Later on, I pursued the CompTIA Network+ certification, and this is where network security caught my attention. Of course, I'm sure you can guess the next course of pursuit, the CompTIA Security+. This certification was the one which helped me realize that my love for IT security was growing and that this is what I wanted to pursue as a career.

After completing my studies in CompTIA Security+, I realized that I had to make a big decision in choosing a specialization. The decision was a bit tough at the time; I decided to enroll in the Certified Ethical Hacker (CEH) programme. This was it for me, my first major certification in IT. My love for cybersecurity grew even more, as I wanted to specialize in offensive security tactics next. From there onwards, I have continued to hone my skills in discovering vulnerabilities and learning about new hacking techniques.

I had often wondered to myself at the time: if I can hack, surely there must be methods a digital forensics professional can use to find the malicious user. I decided to pursue the Computer Hacking Forensics Investigator (CHFI) certification as a natural progression in my journey to understand everything there is about cybersecurity. This taught me many things about operating systems, network and email forensics, and so on. Additionally, I did a couple of firewall certifications and training, such as CCNA Security, Check Point CCSA and Fortinet, as I wanted to learn more about how firewalls operate to protect organizations and improve network security.

During this time, I was working in an administrative position; however, my certifications allowed me to gain employment within the IT industry as a security professional at various companies. However, growth was a bit challenging in some of my past positions while my pursuit to continuously expand my knowledge kept growing. Eventually, I began lecturing Cisco certification programmes and gradually took over cybersecurity certifications and training programmes at various institutions. This opportunity allowed me to grow a lot while working with others, develop secure network designs and strategies, develop training programmes, and train people in both the private and public sectors, ranging from ISPs to government agencies, in the field of cybersecurity.

In 2018, Packt Publishing reached out to me to be a technical reviewer for the book Penetration Testing with Shellcode. After this project was completed, Packt reached out once again in the same year, this time for me to be the lead author for the CCNA Security 210-260 Certification Guide, and before 2018 was over, I had my second book, CompTIA Network+ Certification Guide, published. In early 2019, my third title, Hands-On Penetration Testing with Kali NetHunter, was also published. Finally, in November 2019, my fourth book, Learn Kali Linux 2019, was published.
Currently, I work as a cybersecurity instructor delivering training in offensive security, network security and enterprise networking. Additionally, I share my knowledge and guidance with others through various social media platforms, provide mentoring for anyone in the ICT community, and occasionally deliver speeches on cybersecurity awareness. Following my dreams is what has led me to my career in cybersecurity, where I can help so many people in a lot of different ways, whether by securing their organizations or even safeguarding their families from cyber-attacks and threats. I honestly love what I do, so I don't see it as "work" but as my passion.

Given the pace of change in tech and evolving threats, what role do certifications play, if any? What must-have certifications do you recommend for those starting their cybersecurity career and for those looking for a career boost?

Certifications will always play a vital role in the cybersecurity industry, in both the present and the future, as technologies and threats evolve. Being a certified professional in the industry's latest certifications helps with growth in your career. It also proves you have the necessary skills required for a job role and helps you specialize in technologies, making you stand out from the rest of the crowd.

Whether you're starting a career in cybersecurity or simply looking for a career boost, there are some must-have certifications I would definitely recommend. If you're new to the field of cybersecurity, I would personally recommend starting with a networking certification such as the Cisco Certified Network Associate (CCNA), as it will help you develop a solid foundation in understanding the functions of networking components and protocols, the composition of network traffic as it's passed along multiple networks, and how devices are interconnected and communicate. Networking knowledge will help you understand how cyber-attacks are delivered through the internet and corporate networks.

Secondly, I would recommend both the Certified Ethical Hacker (CEH) certification from EC-Council and the Offensive Security Certified Professional (OSCP) certification from Offensive Security. The CEH contains a lot of valuable information and will help you get through the doors of Human Resources (HR) and various national security agencies; however, the OSCP is currently in higher demand in the cybersecurity industry due to its intensive hands-on training and practical testing, thus simulating a real-world penetration test. Additionally, if finances are a bit challenging in one's life, take a look at the Junior Penetration Tester (eJPT) and the Certified Professional Penetration Tester v2 (eCPPTv2) from eLearnSecurity.

Before choosing a cybersecurity certification to enroll in, take a thorough look at the modules each certification has to offer and ensure each new certification you decide to pursue either teaches you something new or expands your existing knowledge and skill set as a professional. Last but not least, learn some Linux.

On navigating the cybersecurity landscape by Learning Kali Linux

How does your book, Learn Kali Linux 2019, help readers navigate the cybersecurity landscape in 2019? Are there any prerequisites? What are the top 5 key takeaways from your book?

As each day goes by, new threats emerge, while most remain undetectable for long periods of time.
My book, Learn Kali Linux 2019, is designed not only to teach you the role of a penetration tester but also to help develop your mindset to be strategic when searching for security vulnerabilities that a hacker can exploit. There are no formal prerequisites for this book; however, for anyone who is interested in pursuing their studies or a career in the cybersecurity industry, I would definitely recommend having a solid foundation in networking.

The top 5 key takeaways from my book are:

1. Learn how to perform penetration testing starting from scratch, gradually moving on to intermediate and advanced topics, while maintaining a student-centric approach for all learners.
2. Upon completing this book, you will also gain essential skills in learning and understanding the Linux operating system.
3. You will learn how to perform the various stages of penetration testing using a very practical and real-world approach.
4. Beginning a career in cybersecurity, you will learn how to design and build your very own penetration testing virtual lab environment, where you can sharpen your hacking skills safely.
5. On completing this book, you will have the essential hands-on experience and knowledge to start a career in the field of cybersecurity.

On Kali Linux's future scope and applications

Recently, Kali Linux has been made available for the compact computer board, Raspberry Pi 4. How do you see Kali Linux's evolution over time? Is IoT the new frontier for cybersecurity professionals and hackers alike? Where else do you see Kali Linux adapting to in the coming years?

Since its initial release in 2012, the Kali Linux operating system has had a lot of major upgrades, creating an awesome operating system built simply for penetration testing and security auditing for the IT professional. Currently, Kali Linux can be installed on mobile devices such as smartphones and tablets by using the Kali NetHunter edition, and it can even be installed on micro-computing devices with ARM processors such as the Raspberry Pi 4. Over the coming years, I can definitely foresee that newer editions of Kali Linux will be supported on next-generation computing devices.

The rise of IoT devices and networks also brings security concerns to both home and corporate users. Imagine there are hundreds of thousands of IoT devices out there that are connected to the internet but do not have any form of cyber protection. Imagine the possibilities of a hacker exploiting a security weakness on a medical device, or even a smart home security system: the hacker can monitor a person's actions and much more. IoT can make our lives easier but, at the same time, it opens new doorways to cybercriminals. As time goes by, Kali Linux will definitely continue to evolve and improve to fit the needs of any cybersecurity professional.

In the coming updates, what additional features do you wish to see in Kali Linux?

In the upcoming updates, I really wish to see better support and improvements for the Kali NetHunter edition for both current and future devices. NetHunter allows a cybersecurity professional to perform penetration testing tasks using their Android-based smartphone or tablet. Having NetHunter available on a pocket device provides convenience when you are on the go.

About the Author

Glen D. Singh is a cybersecurity instructor, consultant, entrepreneur and public speaker.
He has been conducting multiple training exercises in offensive security, digital forensics, network security, enterprise networking and IT service management annually. He also holds various information security certifications, such as EC-Council's Certified Ethical Hacker (CEH), Computer Hacking Forensic Investigator (CHFI), Cisco's CCNA Security, CCNA Routing and Switching, and many others in the field of network security. Glen has been recognized for his passion and expertise by both private and public sector organizations of Trinidad and Tobago, and internationally.

About the Book

Upgrade your Kali Linux know-how with Learn Kali Linux 2019, which will help you understand how important it has become to pentest your environment to ensure endpoint protection. This book will take you through the latest version of Kali Linux to efficiently deal with various crucial security aspects such as confidentiality, integrity, access control and authentication.

Kali Linux 2019.1 released with support for Metasploit 5.0

Implementing Web application vulnerability scanners with Kali Linux [Tutorial]

Kali Linux 2018 for testing and maintaining Windows security – Wolf Halton and Bo Weaver [Interview]

“Deep learning is not an optimum solution for every problem faced”: An interview with Valentino Zocca

Sunith Shetty
14 Nov 2018
11 min read
Over the past few years, we have seen some advanced technologies in artificial intelligence shaping human life. Deep learning (DL) has become the main driving force in bringing new innovations to almost every industry. We are sure to continue to see DL everywhere. Most companies, including startups, are already integrating deep learning into their day-to-day processes. Deep learning techniques and algorithms have made building advanced neural networks practically feasible, thanks to high-level open source libraries such as TensorFlow, Keras, PyTorch and more.

We recently interviewed Valentino Zocca, a deep learning expert and the author of the book, Python Deep Learning. Valentino explains why deep learning is getting so much hype, and what's the roadmap ahead in terms of new technologies and libraries. He also talks about how major vendors and tech-savvy startups adopt deep learning within their organizations. Being a consultant and an active developer, he is expecting a better approach than back-propagation for carrying out various deep learning tasks.

Author's Bio

Valentino Zocca graduated with a Ph.D. in mathematics from the University of Maryland, USA, with a dissertation in symplectic geometry, after having graduated with a laurea in mathematics from the University of Rome. He spent a semester at the University of Warwick. After a post-doc in Paris, Valentino started working on high-tech projects in the Washington, D.C. area and played a central role in the design, development, and realization of an advanced stereo 3D Earth visualization software with head tracking at Autometric, a company later bought by Boeing. At Boeing, he developed many mathematical algorithms and predictive models, and using Hadoop, he also automated several satellite-imagery visualization programs. He has since become an expert on machine learning and deep learning and has worked at the U.S. Census Bureau and as an independent consultant both in the US and in Italy. He has also held seminars on the subject of machine learning and deep learning in Milan and New York.

Currently, Valentino lives in New York and works as an independent consultant to a large financial company, where he develops econometric models and uses machine learning and deep learning to create predictive models. But he often travels back to Rome and Milan to visit his family and friends.

Key Takeaways

Deep learning is one of the most adopted techniques in image and speech recognition and anomaly detection research and development.

Deep learning is not the optimum solution for every problem faced. Based on the complexity of the challenge, building the neural network can be tricky.

Open-source tools will continue to be in the race when compared to enterprise software. More and more features are expected to improve on providing efficient and powerful deep learning solutions.

Deep learning is used as a tool rather than a solution across organizations. The tool usage can differ based on the problem faced.

Emerging specialized chips are expected to bring more developments in deep learning to the mobile, IoT and security domains.

Valentino Zocca states that we have a quantity vs. quality problem: we will be requiring better paradigms and approaches in the future, which can be achieved through research-driven innovative solutions instead of relying on hardware solutions. We can make faster machines, but our goal is really to make more intelligent machines for performing accelerated deep learning and distributed training.
Full Interview

Deep learning is as much infamous as it is famous in the machine learning community, with camps supporting and opposing the use of DL passionately. Where do you fall on this spectrum? If you were given a chance to convince the rival camp with 5-10 points on your stand about DL, what would your pitch be like?

The reality is that Deep Learning techniques have their own advantages and disadvantages. The areas where Deep Learning clearly outperforms most other machine learning techniques are image and speech recognition and anomaly detection. One of the reasons why Deep Learning does so much better is that these problems can be decomposed into a hierarchical set of increasingly complex structures, and, in multi-layer neural nets, each layer learns these structures at different levels of complexity.

For example, in image recognition, the first layers will learn about the lines and edges in the image. The subsequent layers will learn how these lines and edges get together to form more complex shapes, like the eyes of an animal, and finally the last layers will learn how these more complex shapes form the final image. However, not every problem can suitably be decomposed using this hierarchical approach.

Another issue with Deep Learning is that it is not yet completely understood how it works, and some areas, for example banking, that are heavily regulated may not be able to easily justify their predictions. Finally, many neural nets may require a heavier computational load than other classical machine learning techniques. Therefore, the reality is that one still needs a proficient machine learning expert who deeply understands the functioning of each approach and can make the best decision depending on each problem. Deep Learning is not, at the moment, a complete solution to any problem, and, in general, there can be no definite side to pick; it really depends on the problem at hand.

Deep learning can conquer tough challenges, no doubt. However, there are many common myths and realities around deep learning. Would you like to give your supporting reasoning on whether the following statements are myth or fact?

1. You need to be a machine learning expert or a math geek to build deep learning models.
2. We need powerful hardware resources to use deep learning.
3. Deep learning models are always learning; they improve with new data automagically.
4. Deep learning is a black box, so we should avoid using it in production environments or in real-world applications.
5. Deep learning is doomed to fail. It will be replaced eventually by data sparse, resource economic learning methods like meta-learning or reinforcement learning.
6. Deep learning is going to be central to the progress of AGI (artificial general intelligence) research.

Deep Learning has become almost a buzzword, therefore a lot of people are talking about it, sometimes misunderstanding how it works. People hear the word DL together with "it beats the best player at Go", "it can recognize things better than humans", etc., and people think that deep learning is a mature technology that can solve any problem. In actuality, deep learning is a mature technology only for some specific problems; you do not solve everything with deep learning, and yet at times, whatever the problem, I hear people asking me "can't you use deep learning for it?" The truth is that we have lots of libraries ready to use for deep learning.
For example, you don't need to be a machine learning expert or a math geek to build simple deep learning models for run-of-the-mill problems, but in order to solve some of the challenges that less common issues may present, a good understanding of how a neural network works may indeed be very helpful. Like everything, you can find a grain of truth in each of those statements, but they should not be taken at face value.

With MLaaS being provided by many vendors from Google to AWS to Microsoft, deep learning is gaining widespread adoption not just within large organizations but also by data-savvy startups. How do you view this trend? More specifically, is deep learning being used differently by these two types of organizations? If so, what could be some key reasons?

Deep Learning is not a monolithic approach. We have different types of networks: ANNs, CNNs, LSTMs, RNNs, etc. Honestly, it makes little sense to ask if DL is being used differently by different organizations. Deep Learning is a tool, not a solution, and like all tools it should be used differently depending on the problem at hand, not depending on who is using it.

There are many open source tools and enterprise software (especially the ones which claim you don't need to code much) in the race. Do you think this can be the future where more and more people will opt for ready-to-use (MLaaS) enterprise-backed cognitive tools like IBM Watson rather than open-source tools?

This holds true for everything. At the beginning of the internet, people would write their own HTML code for their web pages; now we use tools that do most of the work for us. But if we want something to stand out, we need a professional designer. The more a technology matures, the more ready-to-use tools will be available, but that does not mean that we will never need professional experts to improve on those tools and provide specialized solutions.

Deep learning is now making inroads into the mobile, IoT and security domains as well. What makes DL great for these areas? What are some challenges you see while applying DL in these new domains?

I do not have much experience with DL on mobile, but that is clearly a direction that is becoming increasingly important. I believe we can address these new domains by building specialized chips.

Deep learning is a deeply researched topic within machine learning and AI communities. Every year brings us new techniques, from neural nets to GANs to capsule networks, that then get widely adopted both in research and in real-world applications. What are some cutting-edge techniques you foresee getting public attention in deep learning in 2018 and in the near future? And why?

I am not sure we will see anything new in 2018, but I am a big supporter of the idea that we need a better paradigm that can excel more at inductive reasoning rather than just deductive reasoning. At the end of last year, even DL pioneer Geoff Hinton admitted that we need a better approach than back-propagation; however, I doubt we will see anything new coming out this year, it will take some time.

We keep hearing noteworthy developments in AI and deep learning by DeepMind and OpenAI. Do you think they have the required armory to revolutionize how deep learning is performed? What are some key challenges for such deep learning innovators?

As I mentioned before, we need a better paradigm, but what this paradigm is, nobody knows.
Gary Marcus is a strong proponent of introducing more structure in our networks, and I do concur with him; however, it is not easy to define what that should be. Many people want to use the brain as a model, but computers are not biological structures, and if we had tried to build airplanes by mimicking how a bird flies, we would not have gone very far. I think we need a clean break and a new approach; I do not think we can go very far by simply refining and improving what we have.

Improvement in processing capabilities and the availability of custom hardware have propelled deep learning into production-ready environments in recent years. Can we expect more chips and other hardware improvements in the coming years for GPU-accelerated deep learning and distributed training? What other supporting factors will facilitate the growth of deep learning?

Once again, foreseeing the future is not easy; however, as these questions are related, I think only so much can be gained by improving chips and GPUs. We have a quantity vs. quality problem. We can improve quantity (of speed, memory, etc.) through hardware improvements, but the real problem is that we need a real quality improvement, better paradigms and approaches, and that needs to be achieved through research and not with hardware solutions. We can make faster machines, but our goal is really to make more intelligent machines. A child can learn by seeing just a few examples; we should be able to create an approach that allows a machine to also learn from few examples, not by cramming millions of examples in a short time.

Would you like to add anything more for our readers?

Deep Learning is a fascinating discipline, and I would encourage anyone who wants to learn more about it to approach it as a research project, without underestimating his or her own creativity and intuition. We need new ideas.

If you found this interview to be interesting, make sure you check out other insightful interviews on a range of topics:

Blockchain can solve tech's trust issues – Imran Bashir

"Tableau is the most powerful and secure end-to-end analytics platform": An interview with Joshua Milligan

"Pandas is an effective tool to explore and analyze data": An interview with Theodore Petrou

Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]

Bhagyashree R
09 Jul 2019
11 min read
Around this time in 2015, the W3C introduced WebAssembly, a small binary format that promises to bring near-native performance to the web. Since then it has been well received by web developers, with some going as far as to say that the "death of JavaScript is near." It is also supported in all the major browsers, including Firefox, Chrome, Safari, and Edge.

While WebAssembly was initially designed with the web in mind, it would be a waste not to take its performance and security benefits to "beyond the web" environments as well. This year we are seeing many initiatives pushing WebAssembly beyond the web. One of them is by Fastly, an edge cloud platform provider. Beginning this year, Fastly open sourced its WebAssembly compiler and runtime, named Lucet. With Lucet, Fastly's edge cloud can execute tens of thousands of WebAssembly programs simultaneously. We had a great opportunity to interview Fastly's CTO Tyler McMullen, who gave us insight into why and how they came up with Lucet, what sets it apart from other WebAssembly compilers, the inner workings and design decisions behind Lucet, and more.

Here are some of the highlights from the interview:

Benefits of WebAssembly beyond the Web

It is exciting to think that we will be able to get a near-native experience on the web. But WebAssembly also aims to solve another major concern of today's times: security.

"WebAssembly was designed for performance, and also for security. WebAssembly programs carry much stronger security guarantees than native code, with comparable performance. That makes it a great candidate for the edge cloud, where we can use the Lucet compiler and runtime to execute WebAssembly programs in isolation from each other, at a much lower resource and performance cost than competing approaches to multi-tenant isolation of native code, like processes, containers, or virtual machines."

Along with these security and performance benefits, the growing support for WebAssembly by compilers like LLVM (since its version 8 release) also makes it suitable for non-web environments. McMullen adds,

"Besides security, the other aspect that makes WebAssembly attractive beyond the browser is maturing support by compilers, most notably the LLVM toolchain, used by the Clang C compiler and Rust language compiler, among others. Rather than having to build a new language, or a new compiler, to emit code with the security guarantees we need, we can use the WebAssembly output of any compiler. And it means that tons of existing programs can be compiled to WebAssembly with minimal modification."

How Lucet ensures security

With security being one of the major focus areas of Lucet, we asked McMullen how security in Lucet works.

"WebAssembly provides a set of guarantees about the security and safety of the code that can be verified during compilation. But those guarantees only hold if verification and compilation are done correctly. Those guarantees also require the runtime to cooperate. So there are a lot of moving pieces here that need to work in concert with each other. Lucet takes a security-by-contract approach to this problem. The compilation phase builds up a set of constraints for the runtime. Those constraints get embedded into the compiled artifact. The runtime then picks up those constraints and enforces them while loading and running the module. This lets us enforce things like which functions a module will be allowed to import for the embedding program, how much memory it will attempt to use, as well as the layout of that memory.
So, the security guarantees that Lucet provides end up being enforced with a combination of the compiler, runtime, and the embedding program."

Compilation in Lucet

Lucet is designed to compile code written in C/Rust to WebAssembly and then compile this to native. So, why can't we directly compile code written in C/Rust to native code? McMullen explains that the WebAssembly step is what gives you guarantees about the behavior of the generated code.

"If you used a typical C or Rust compiler you'd have relatively little in the way of guarantees about the behavior of the generated code. With Rust you'd have a bit more in that you could guarantee memory safety, but that's not sufficient by itself. On the other hand, we could certainly create a new C or Rust compiler that guaranteed all the safety guarantees we've already discussed, but that would be a tremendous amount of work and would require still more work for each language you wanted to safely compile. We chose WebAssembly because it provides many of the safety and performance guarantees we're looking for and -- just as importantly -- also has community support. Rather than reinventing the wheel over and over again, we as a community can work together toward a common goal."

Lucet is still in its early stages of development. McMullen shares what the Lucet team is up to now:

"Prior to open sourcing Lucet, we focused on WebAssembly programs emitted by a couple of compilers - LLVM via Clang and Rustc, and AssemblyScript. Supporting that subset of WebAssembly was sufficient to launch Terrarium late last year, where users can create complex web services that are compiled and deployed on demand. Since the Lucet announcement, we've seen interest and contributions from other languages, including Swift, Golang, Zig, and Wam. We've fixed a bunch of the spec compliance issues that blocked these users, and are actively working on fixing the remaining ones now."

To support, or not to support JavaScript, that is the question

While building WebAssembly runtimes today, developers have two paths to choose from: either supporting JavaScript or not. Lucet follows the latter path, which helps keep it simple yet performant.

"Security and resource consumption also drove our design here. Modern, fast JavaScript engines are quite complex, require lots of RAM, startup time, and -- in order to make them fast -- highly advanced JIT compilers. These requirements run counter to what Fastly does. By dropping JavaScript, we can dramatically reduce the complexity and increase the performance of our system. To be clear, reducing complexity isn't just about making life easier on ourselves. By cutting out the massive complexity of JavaScript we can also reduce the attack surface and increase confidence in our safety guarantees."

In the myriad of WebAssembly runtimes, what sets Lucet apart

There are currently quite a few WebAssembly runtimes, for instance Nebulet, Wasmjit, and Life, including ones very similar to Lucet like Wasmer and Wasmtime. We were curious to know what differences Lucet brings to the table.

"Lucet was designed from the ground up for multi-tenant, highly concurrent use cases, which matches the runtime requirements of Fastly's edge cloud. The major design decisions that differentiate it are all focused on performance and resource consumption in our use case, where we need to launch WebAssembly instances for each request our edge cloud handles. Adam Foltzer, a senior software engineer at Fastly, wrote a detailed post on our design and benchmarked its performance here.
To support, or not to support JavaScript, that is the question

While building WebAssembly runtimes today, developers have two paths to choose from: either supporting JavaScript or not. Lucet follows the latter path, which helps keep it simple yet performant.

"Security and resource consumption also drove our design here. Modern, fast JavaScript engines are quite complex, require lots of RAM, startup time, and -- in order to make them fast -- highly advanced JIT compilers. These requirements run counter to what Fastly does. By dropping JavaScript, we can dramatically reduce the complexity and increase the performance of our system. To be clear, reducing complexity isn't just about making life easier on ourselves. By cutting out the massive complexity of JavaScript we can also reduce the attack surface and increase confidence in our safety guarantees."

In the myriad of WebAssembly runtimes, what sets Lucet apart

There are currently quite a few WebAssembly runtimes, for instance Nebulet, Wasmjit, and Life, as well as ones very similar to Lucet, such as Wasmer and Wasmtime. We were curious to know what differences Lucet brings to the table.

"Lucet was designed from the ground up for multi-tenant, highly concurrent use cases, which matches the runtime requirements of Fastly's edge cloud. The major design decisions that differentiate it are all focused on performance and resource consumption in our use case, where we need to launch WebAssembly instances for each request our edge cloud handles. Adam Foltzer, a senior software engineer at Fastly, wrote a detailed post on our design and benchmarked its performance here. Lucet shares a major component with the Wasmtime runtime, the Cranelift code generation engine. Wasmtime is currently designed for a single-tenant use case, and supports in-process compilation of WebAssembly, often called JIT. We are collaborating with the maintainers of Wasmtime on Cranelift, and on runtime implementations of the WebAssembly System Interface (WASI)."

Why Fastly chose Rust for implementing Lucet

Given Rust's memory and thread safety guarantees, supportive community, and quickly evolving toolchain, many major projects are being written or rewritten in Rust. One of them is Servo, an experimental HTML rendering engine intended to eventually replace Firefox's current engine. Mozilla is also using Rust to rewrite many key parts of Firefox under Project Quantum. More recently, Facebook chose Rust to implement its controversial Libra blockchain. Fastly's decision to choose Rust as Lucet's implementation language was likewise focused on safety:

"As for why we chose to write Lucet in Rust, the biggest reason was again safety. Writing compilers is complex work. Rust lets us take much of that complexity, describe it with types, and let the Rust compiler check our work in much deeper ways than other languages allow. It lets us focus on the problem we're trying to solve, rather than the incidental issues of complex software."
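McMullen's point about describing complexity with types and letting the compiler check the work is easiest to see with a small, generic example rather than Lucet's own code. The snippet below is purely illustrative and has nothing to do with the Lucet codebase; it simply shows the kind of guarantee Rust gives for free: ownership of the data moves into the thread that uses it, and the compiler rejects any code that would still touch it from outside.

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3];

    // Ownership of `data` moves into the spawned thread. Any later use of
    // `data` on the main thread would be rejected at compile time, which rules
    // out this whole class of data races before the program ever runs.
    let handle = thread::spawn(move || {
        let sum: i32 = data.iter().sum();
        println!("sum = {}", sum);
    });

    handle.join().unwrap();
}
```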
Fastly on the future of Rust and WebAssembly

In the past few years, Fastly has been focusing heavily on Rust and WebAssembly. McMullen believes these technologies will be central to the future and will impact key domains in tech. While Rust enables developers to write code that is both highly efficient and safe, WebAssembly gives you the flexibility of writing code in your language of choice and running it on almost any platform.

"With our role in the internet, efficiency is of utmost importance. That's why, traditionally, the type of software we build has been done with lower level languages like C and C++. We still, today, write and maintain quite a bit of software in C. There are some problems where C is still the correct option. That domain of C -- and to a lesser extent, processor-specific assembly code -- has been largely unassailable for decades as we've developed languages that make writing software faster and easier, but at the cost of efficiency. That's been a great detriment to the entire industry because of how easy it is to write unsafe C code. We believe that Rust has finally been the language to change that. It allows us to write highly efficient code while also providing incredible safety. Now, WebAssembly. WebAssembly has the potential to provide something that we've never, in the history of computing, managed to accomplish: a common platform. It was designed to run in a browser, but manages to provide the other components that are needed: efficiency, safety, and platform-independence. We imagine a future in which a WebAssembly module can be run in a browser, on your watch, on your phone, on your TV, in the games you play, and inside server software. We're still a ways off from that and many pieces are still needed. Lucet is our attempt at providing a WebAssembly compiler and runtime that is made to be used across many different use cases. The first one is Fastly's edge, but we want to see many more."

Fastly on its other products and projects

Limitations in the legacy CDNs that Fastly's edge cloud platform addresses

A CDN, or Content Delivery Network, consists of a geographically distributed group of servers that work together to ensure that content requested by a user reaches them as fast as possible. However, legacy CDNs have many limitations, such as bulky XML-based configuration files and specifications. McMullen adds, "Legacy CDNs suffer from a number of technical limitations that make them particularly ill-equipped to address changing consumer expectations, not to mention, developer and enterprise requirements. We've all had those online experiences when a site crashes or is non-responsive when we need it most, and our mission is to fuel the next modern digital experience, an experience that's fast, secure, and reliable. By and large, traditional CDNs are black box solutions that are limited in their ability to provide real-time visibility and control, largely as a result of their outdated architecture, which adds cost and limits developers' flexibility to expand on functionality."

Fastly's edge cloud platform is not that -- rather, it aims to address these limitations by bringing data closer to the user. "As a result, developers have not been truly empowered to pursue digital transformations, despite many attempts for improvement within the industry," he adds.

What other projects by Fastly we should look forward to

Fastly is continuously contributing towards making the internet better and safer by getting involved in projects like QUIC, Encrypted SNI, and the standardization of WASI. Last year, Fastly made three of its projects available on Fastly Labs: Terrarium, Fiddle, and Insights. When asked what else it is working on, McMullen shared, "Fastly Labs is heavily dependent on experimentation. If the experiment goes well and we think it'll be useful for others, then we release it. We have quite a few experiments currently underway, and many of them are around the items listed in the question: ESNI, QUIC, WASI, as well as others like DNS-over-HTTPS. More iteration on what we have now is also in the cards. Lucet has come a long way, but it still has so much room to grow. Expect to see some pretty compelling developments in performance, safety, and features there."

Follow Tyler McMullen on Twitter: @tbmcmullen

Learn more about Fastly and its edge cloud platform at Fastly's official website.

Fastly open sources Lucet, a native WebAssembly compiler and runtime
Fastly, edge cloud platform, files for IPO
Rust's original creator, Graydon Hoare on the current state of system programming and safety