Author Posts

122 Articles

“Pandas is an effective tool to explore and analyze data”: An interview with Theodore Petrou

Amey Varangaonkar
24 Apr 2018
9 min read
It comes as no surprise to many developers that Python has become the language of choice for data science. One of the reasons for its staggering adoption in the data science community is its rich suite of libraries for effective data analysis and visualization, which allow you to extract useful, actionable insights from your data. Pandas is one such Python-based library, providing a solid platform for high-performance data analysis.

Ted Petrou is a data scientist and the founder of Dunder Data, a professional educational company focusing on exploratory data analysis. Before founding Dunder Data, Ted was a data scientist at Schlumberger, a large oil services company, where he spent the vast majority of his time exploring data. Ted received his Master's degree in statistics from Rice University and has used his analytical skills to play poker professionally. He taught math before becoming a data scientist. He is a strong supporter of learning through practice and can often be found answering questions about pandas on Stack Overflow.

In this exciting interview, Ted takes us through an insightful journey into pandas - Python's premier library for exploratory data analysis - and tells us why it is the go-to library for many data scientists looking to discover new insights in their data.

Key Takeaways

- Data scientists are in the business of making predictions. To make the right predictions you must know how to analyze your data, and to perform data analysis efficiently you must have a good understanding of the concepts as well as proficiency with tools like pandas.
- Pandas Cookbook contains step-by-step solutions that help you master the pandas syntax while going through the data exploration journey (missteps and all) to solve the most common and not-so-common problems in data analysis.
- Unlike R, which has several different packages for different data science tasks, pandas offers all its data analysis capabilities as a single large Python library.
- Pandas has good time-series capabilities, making it well-suited for building financial applications. That said, its best use is in data exploration - finding interesting discoveries within the data.
- Ted says beginners in data science should focus on learning one data science concept at a time and mastering it thoroughly, rather than getting an overview of multiple concepts at once.

Let us start with a very fundamental question - why is data crucial to businesses these days? What problems does it solve?

All businesses, from a child's lemonade stand to the largest corporations, must account for all their operations in order to be successful. This accounting of supplies, transactions, people, and so on is what we call 'data', and it gives us historical records of what has transpired in a business. Without this data, we would be reduced to oral history, or whatever humans used for accounting before the advent of writing systems. By collecting and analyzing data, we gain a deeper understanding of how the business is progressing. In the most basic instances, such as with a child's lemonade stand, we know how many glasses of lemonade have been sold, how much was spent on supplies, and, importantly, whether the business is profitable. This example is incredibly trivial, but it should be noted that such simple data collection is not something that comes naturally to humans.
For instance, many people have a desire to lose weight at some point in their life, but fail to accurately record their daily weight or calorie intake in any regular manner, despite the large number of free services available to help with this.

There are so many Python-based libraries out there which can be used for a variety of data science tasks. Where does pandas fit into this picture?

Pandas is the most popular library for performing the most fundamental tasks of data analysis. Not many libraries can claim to provide the power and flexibility of pandas for working with tabular data.

How does pandas help data scientists in overcoming different challenges in data analysis? What advantages does it offer over domain-specific languages such as R?

One of the best reasons to use pandas is its popularity. There is a tremendous amount of resources available for it, and an excellent database of questions and answers on Stack Overflow. Because the community is so large, you can almost always get an immediate answer to your problem. Comparing pandas to R is difficult, as R is an entire language that provides tools for a wide variety of tasks. Pandas is a single large Python library. Nearly all the tasks possible in pandas can be replicated with the right library in R.

We would love to hear your journey as a data scientist. Did having a master's degree in statistics help you in choosing this profession? Also, tell us something about how you leveraged analytics in professional poker!

My journey to becoming a "data scientist" began long before the term even existed. As a math undergrad, I found out about the actuarial profession, which appealed to me because of its meritocratic pathway to success. Because I wasn't certain that I wanted to become an actuary, I entered a Ph.D. program in statistics in 2004, the same year that an online poker boom began. After a couple of unmotivated and half-hearted attempts at learning probability theory, I left the program with a master's degree to play poker professionally. Playing poker has been by far the most influential and beneficial resource for understanding real-world risk. Data scientists are in the business of making predictions, and there's no better way to understand the outcomes of the predictions you make than by exposing yourself to risk.

Your recently published 'pandas Cookbook' has received a very positive response from readers. What problems in data analysis do you think this book solves?

I worked extremely hard to make pandas Cookbook the best available book on the fundamentals of data analysis. The material was formulated by teaching dozens of classes and hundreds of students with my company Dunder Data and my meetup group Houston Data Science. Before getting to what makes a good data analysis, it's important to understand the difference between the tools available to you and the theoretical concepts. Pandas is a tool, and is not much different from a big toolbox in your garage. It is possible to master the syntax of pandas without actually knowing how to complete a thorough data analysis. This is like knowing how to use all the individual tools in your toolbox without knowing how to build anything useful, such as a house. Similarly, understanding theoretical concepts such as 'split-apply-combine' or 'tidy data' without knowing how to implement them with a specific tool will not get you very far. Thus, in order to do a good data analysis, you need to understand both the tools and the concepts.
This is what pandas Cookbook attempts to provide. The syntax of pandas is learned together with common theoretical concepts, using real-world datasets.

Your readers loved the way you structured the book and the kind of datasets, examples, and functions you chose to showcase pandas in all its glory. Was it experience, intuition, or observation that led to this fantastic writing insight?

The official pandas documentation is very thorough (well over 1,000 pages) but does not present the features as you would see them in a real data analysis. Most of the operations are shown in isolation on contrived or randomly generated data. In a typical data analysis, it is common for many pandas operations to be called one after another. The recipes in pandas Cookbook expose this pattern to the reader, which will help them when they are completing an actual data analysis. This is not meant to disparage the documentation, as I have read it multiple times myself and recommend reading it along with pandas Cookbook.

Quantitative finance is one domain where pandas finds major application. How does pandas help in developing better financial applications? In what other domains does pandas find important applications, and how?

Pandas has good time-series capabilities, which makes it well-suited for financial applications. Its ability to group by specific time periods is a very useful feature. In my opinion, pandas' most important application is in exploratory data analysis. An analyst can quickly use pandas to find interesting discoveries within the data and visualize the results with either matplotlib or Seaborn. This tight integration, coupled with the Jupyter Notebook interface, makes for an excellent ecosystem for generating and reporting results to others.
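To give a concrete flavor of the style Ted describes - many pandas operations chained one after another, including split-apply-combine and grouping by time periods - here is a minimal sketch. The dataset and column names are invented for illustration; they are not from the book or the interview:

```python
import pandas as pd
import numpy as np

# Build a small synthetic time series of daily sales (illustrative data only).
rng = np.random.default_rng(seed=42)
sales = pd.DataFrame({
    "date": pd.date_range("2018-01-01", periods=365, freq="D"),
    "region": rng.choice(["North", "South"], size=365),
    "revenue": rng.normal(loc=1000, scale=150, size=365).round(2),
})

# Several pandas operations called one after another:
# filter, derive a month column, group (split-apply-combine), and reshape.
monthly = (
    sales
    .query("revenue > 0")
    .assign(month=lambda df: df["date"].dt.to_period("M"))
    .groupby(["region", "month"])["revenue"]
    .sum()
    .unstack("region")   # regions become columns, one row per month
)
print(monthly.head())
```

Chains like this read top to bottom as a single train of thought, which is exactly the exploratory pattern the recipes in the book expose.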
Please tell us more about 'pandas Cookbook'. What in your opinion are the three major takeaways from it? Are there any prerequisites needed to get the most out of the book?

The only prerequisite for pandas Cookbook is a fundamental understanding of the Python programming language. The recipes progress in difficulty from chapter to chapter, and for those with no pandas experience, I would recommend reading it cover to cover. One of the major takeaways from the book is being able to write modern and idiomatic pandas code. Pandas is a huge library, and there are always multiple ways of completing each task. This is more of a negative than a positive, as beginners notoriously write poor and inefficient code. Another takeaway is the ability to probe and investigate data until you find something interesting. Many of the recipes are written as if the reader is experiencing the discovery process alongside the author. There are occasional (and purposeful) missteps in some recipes to show that the right course of action is not always known. Lastly, I wanted to teach common theoretical concepts of data analysis while simultaneously teaching pandas syntax.

Finally, what advice would you have for beginners in data science? What things should they keep in mind while designing and developing their data science workflow? Are there any specific resources they could refer to, apart from this book of course?

For those just beginning their data science journey, I would suggest keeping their 'universe small'. This means concentrating on as few things as possible. It is easy to get caught up in the feeling that you need to keep learning as much as possible. Mastering a few subjects is much better than having a cursory knowledge of many.

If you found this interview intriguing, make sure you check out Ted's pandas Cookbook, which presents more than 90 unique recipes for effective scientific computation and data analysis.


Understand Quickbooks online/desktop, online security, use cases, and more with Crystalynn Shelton, a certified QuickBooks ProAdvisor

Vincy Davis
27 Dec 2019
8 min read
QuickBooks, the accounting software package developed and marketed by Intuit, is targeted towards small and medium-sized businesses. It offers on-premises accounting applications as well as cloud-based versions with remote access capabilities like remote payroll assistance and outsourcing, electronic payment functions, online banking and reconciliation, mapping features, and more.

To learn more about QuickBooks' latest features and its learning curve for beginners, we did a quick interview with Crystalynn Shelton, a certified QuickBooks ProAdvisor and author of the book 'Mastering QuickBooks 2020'. With more than 10 years of experience with QuickBooks, Shelton says QuickBooks is not only user-friendly but also cost-effective. Further, when asked about her views on QuickBooks Online, Shelton points out that its unlimited live technical support is one of its main features.

On QuickBooks, its benefits and use cases

What are some of the advantages of QuickBooks that set it apart from its competitors?

QuickBooks has a number of advantages that set it apart from its competitors. First, it is affordable for most small businesses. Whether you purchase an Online subscription (starting at $20/month) or a desktop product (starting at a one-time fee of $199), there is something for every budget. Another benefit of using QuickBooks is that the program is very user-friendly. Most small business owners purchase the software and are able to set it up without having an IT person on staff. In addition, there are a number of training videos and an extensive help menu within the program, not to mention live tech support if you need it. Because QuickBooks is the accounting software most widely used by small businesses, most accountants and CPAs are familiar with the program. Some of these folks are certified ProAdvisors (like myself). They can offer consulting, training, and even bookkeeping services to small business owners who use QuickBooks.

Can you elaborate on how small businesses can benefit from QuickBooks? Also, how does QuickBooks simplify tasks for them?

While there are numerous reasons why small businesses decide to use QuickBooks, a few tend to be the most common: small businesses who can't afford to hire a bookkeeper; small businesses who have outgrown the use of Excel spreadsheets and need a more sophisticated way to track income and expenses; small businesses who need financial statements in order to apply for a line of credit or business loan; and small businesses whose tax professional will no longer accept a shoebox of receipts to file taxes.

QuickBooks simplifies bookkeeping by allowing you to track all aspects of the business in one place: accounts payable, accounts receivable, income, and expenses. It uses simple language such as "people who owe you" (aka accounts receivable) or "what you owe to others" (aka accounts payable) to help business owners without prior bookkeeping knowledge understand the program. QuickBooks allows you to accept credit card payments from customers so you can get paid faster and easily reconcile payments to open invoices. Not to mention, you can reduce (if not eliminate) manual data entry by connecting all of your business bank and credit card accounts to QBO.

Can you elaborate on how your book 'Mastering QuickBooks 2020' will prepare bookkeepers and accounting students to learn the ropes of QuickBooks?
Also, what does the learning curve look like for users who have no bookkeeping knowledge and no experience with QuickBooks?

This book was written with the assumption that the reader has no experience or knowledge of bookkeeping. We use simple language to explain how QuickBooks works, and we have also provided screenshots to support the concepts being taught. Chapter 1 includes a section that covers bookkeeping basics, which will help non-accountants gain a better understanding of the terminology used in the field of accounting as well as in QuickBooks. This information will help aspiring accountants build on their existing bookkeeping knowledge. In addition, we have included the behind-the-scenes debits and credits for certain transactions to help accounting students prepare for the CPA exam or other academic tests.

Shelton's views on QuickBooks Online and Desktop

What are your thoughts on QuickBooks Online and QuickBooks Desktop? What are the benefits of cloud accounting over desktop? Do factors such as the size of an organization, or its maturity, matter in choosing between the online and the desktop version?

There are several benefits of using cloud accounting software over desktop. Cloud accounting software allows you to manage your business from any device with an internet connection, whereas desktop limits you to a desktop computer. With cloud accounting software like QuickBooks Online, you can give anyone access to your QuickBooks data without them having to travel to your office. Cloud accounting software includes automatic real-time updates of your data. Unlike desktop software, you don't have to worry about backing up your data with Online; it's automatically done for you. Finally, QuickBooks Online includes unlimited live technical support. This is an invaluable feature for small business owners who are managing their own books and need the ability to get help when they need it.

The size of an organization, its structure, and its length of time in business can definitely impact whether a business should choose QuickBooks Online or Desktop. As a QuickBooks ProAdvisor, one of the first things I do is conduct an assessment to determine what the needs of my clients are. This involves documenting the details of their current processes (i.e. invoicing customers, paying bills, managing inventory, etc.). Once I have this information, I am able to determine whether QuickBooks Desktop is right or if QuickBooks Online is the best fit. If both products are ideal, I provide my clients with the downsides (if any) of going with one product over the other. This gives my clients all of the information they need to make an informed decision.

On how QuickBooks secures online data

How does QuickBooks help in securing payments? How does QuickBooks keep online data safe?

To secure payments, Intuit transmits, supports, protects, and accesses all cardholder information in compliance with the Payment Card Industry's (PCI) data security standards. Additional security precautions Intuit has implemented are as follows: all data between Intuit servers and their customers is encrypted with at least 128-bit TLS, and all copies of daily backup data are encrypted with 256-bit AES encryption. Data is kept secure with multiple servers housed in Tier-3 data centers that have strict access controls and real-time video monitoring of the data center. All servers are hardened Linux installations, which are monitored in real time and kept up to date with security patches.
Can you suggest some best practices (at least five) that will help QuickBooks aspirants save time and become QuickBooks pros?

There are several ways you can save time and become proficient in QuickBooks Online. First, I recommend that you use QuickBooks on a daily basis. The more hands-on experience you have with QuickBooks, the more proficient you will become. Second, take the time to properly set up your QuickBooks account before you start entering transactions. In Chapter 2, we provide you with a detailed checklist of the information you need to set up QuickBooks. By taking the time to set up customers, vendors, the chart of accounts, and your products and services upfront, you will spend less time having to do it later on when you are trying to enter data. Third, all aspiring bookkeepers and accountants should get certified in QuickBooks Online. Certification is offered through Intuit, and it is free. As a Certified QuickBooks ProAdvisor, you get access to product discounts, marketing materials to promote bookkeeping services to prospective clients, and a certification badge and designation you can put on business cards, websites, and email signature lines. Fourth, utilize keyboard shortcuts. They will save you time as you navigate the program. We have included a list of QBO keyboard shortcuts in the appendix of this book. Finally, connect as many bank and credit card accounts as you can to QBO. By doing so, you will reduce the amount of manual data entry required, which will help you keep your books up to date.

If you want to learn how to build the perfect budget, simplify tax return preparation, manage inventory, track job costs, and generate income statements and financial reports, check out Crystalynn's book 'Mastering QuickBooks 2020'. This book will work for a small business owner, bookkeeper, or accounting student who wants to learn how to make the most of QuickBooks Online.

About the author

Crystalynn Shelton is a licensed Certified Public Accountant and a certified QuickBooks ProAdvisor, and has been certified in QuickBooks for more than 10 years. Crystalynn is currently a staff writer for Fit Small Business and an Adjunct Instructor at UCLA Extension, where she teaches accounting, bookkeeping, and QuickBooks to hundreds of small business owners and accounting students each year. Her previous experience includes working at Intuit (QuickBooks) as a Sr. Learning Specialist.

Read next:
- MongoDB's CTO Eliot Horowitz on what's new in MongoDB 4.2, Ops Manager, Atlas, and more
- New QGIS 3D capabilities and future plans presented by Martin Dobias, a core QGIS developer
- Greg Walters on PyTorch and real-world implementations and future potential of GANs
- Elastic marks its entry in security analytics market with Elastic SIEM and Endgame acquisition
- "The challenge in Deep Learning is to sustain the current pace of innovation", explains Ivan Vasilev, machine learning engineer


Discussing SAP: Past, present and future with Rehan Zaidi, senior SAP ABAP consultant [Interview]

Savia Lobo
04 Oct 2018
11 min read
SAP, the market-leading enterprise software company, recently became the first European technology company to create an AI ethics advisory panel, announcing seven guiding principles for AI development. These guidelines revolve around recognizing AI's significant impact on people and society. Also, last week at the Microsoft Ignite conference, SAP, in collaboration with Microsoft and Adobe, announced the Open Data Initiative. This initiative aims to help companies better govern their data and support privacy and security initiatives. For SAP, the initiative will bring further advancements to its SAP C/4HANA and S/4HANA platforms. All of these actions emphasize SAP's focus on transforming itself into a responsible data use company.

We recently interviewed Rehan Zaidi, a senior SAP ABAP consultant. Rehan became one of the youngest SAP authors worldwide when he was published in the prestigious SAP Professional Journal in 2001. He has written a number of books, and over 20 articles and professional papers for the SAP Professional Journal USA and HR Expert USA, part of the prestigious sapexperts.com library. Following are some of his views on the SAP community and products, and how the SAP suite can benefit people including budding professionals, developers, and business professionals.

Key takeaways

- SAP HANA was introduced to run jobs up to 200 times faster while maintaining efficiency.
- The introduction of SAP Leonardo brought in the next wave of AI, machine learning, and blockchain services via the SAP Cloud Platform and other standalone projects.
- Experienced ABAP developers should look forward to getting certified in one of the newest technologies, such as HANA and Fiori.
- SAP ERP Central Component (SAP ECC) is the on-premises version of SAP, usually implemented in medium and large-sized companies. For smaller companies, SAP offers its Business One ERP platform.
- SAP Fiori is a line of SAP apps meant to address criticisms of SAP's user experience and UI complexity.

Q.1. SAP is one of the most widely used ERP tools. How has it evolved over the past few years, from the traditional on-premise model, to keep up with cloud-based trends?

Yes. Let me cover the main points. SAP started as a company in 1973, when its first product, SAP R/98, was launched. In 1979, SAP launched the R/2 design. It had most of the typical processes, such as accounting, manufacturing processes, supply chain logistics, and human resources. Then came R/3, which brought the more efficient three-tier architecture (application server, database, and presentation (GUI)), with more new modules and functionalities added. It was a smart system, fully configurable by functional consultants. This was further enhanced with the NetWeaver capability, which brought integration with the internet and SOA capability. SAP then introduced the ECC 5 and subsequently the ECC 6 release. Mobility was added later, letting mobile applications running on devices access business processes in SAP and execute them; both displaying and updating SAP data became possible. The HANA system was then introduced; it is very fast and efficient, allowing jobs to run up to 200 times faster than before. Cloud systems then became available, letting customers connect to the SAP Cloud Platform via their on-premise systems and get access to services such as Mobile Service for app protection and Mobile Service for SAP Fiori, among others.
SAP Leonardo was finally introduced as a way of bringing in next-gen AI, machine learning, and blockchain services via standalone projects and the SAP Cloud Platform.

Q.2. Being a senior ABAP programming analyst, what does your typical day look like?

Ahh. Well, a typical day! No two days are the same for us. Each morning we find ourselves confronting a problem whose solution is to be devised. A different problem every day, followed by a unique solution. We spend hours and hours finding issues in custom-developed programs. We learn about making custom programs run faster. We get requirements from a wide variety of users. They may be in Human Resources, Materials Management, Sales and Distribution, Finance, and so on. The requirement may pertain to an entirely new report or a dialog program with a set of screens. We even build Fiori applications (using a JavaScript-based library) that may be accessible from a PC or a mobile device. I also get requests to teach junior or trainee SAP developers on a wide variety of technologies.

Q.3. Can you tell us about the learning curve for SAP? There are different job profiles related to SAP, ranging from executives to consultants and managers. How does each of them learn or stay updated on SAP?

Yes, this is a very important question. A simple answer is that "there is no end to learning, and at any stage, learning is never enough," no matter which field within SAP you belong to. Things are constantly changing. The more you read and the more you work, the more you feel that there is a lot to be done. You need to constantly update yourself and learn about new technologies. There is plenty of material available on the internet. I usually refer to the official SAP website for newer courses available. They even tell you for which background (managers, developers) the courses are relevant. I also go to open.sap.com for new courses. Whether they are consultants (functional and technical) or managers, all of them need to keep themselves up to date. They must take new courses and learn about innovation in their technology. For example, HR consultants must now study and try to learn about SuccessFactors. Even integration of SAP HANA with other software might be an interesting topic today. There are Fiori- and HANA-related courses for Basis consultants, and the corresponding tracks for developers. Some know-how of newer technologies is also important for managers and executives, since your decisions may need to be adapted based on the underlying technologies running in your systems. You should know the pros and cons of all technologies in order to make the correct move for your business.

Q.4. Many believe an SAP certification improves their chances of getting jobs at competitive salaries. How important are certifications? Which SAP certifications should a budding developer look forward to obtaining?

When I did my certification in October 2000, I used to think that certifications were not important. But now I have realized that, yes, they make a difference. Certifications are definitely a plus point. They enhance your CV and give you an edge over those who are not certified. I have found job adverts that specifically mention that certification will be required or advantageous. However, certifications are only useful when you have at least four years of experience. For a fresh graduate, a certification might not be very useful.
A useful SAP consultant or developer combines a solid foundation of knowledge with a touch of experience. I suggest all my juniors go for certifications in order to strengthen their concepts. These include:

- C_C4C30_1711 - SAP Certified Development Associate - SAP Hybris Cloud for Customer
- C_CP_11 - SAP Certified Development Associate - SAP Cloud Platform
- C_FIORDEV_20 - SAP Certified Development Associate - SAP Fiori Application Developer
- C_HANADEV_13 - SAP Certified Development Associate - SAP HANA
- C_SMPNHB_30 - SAP Certified Development Associate - SAP Mobile Platform Application Development (SMP 3.0)
- C_TAW12_750 - SAP Certified Development Associate - ABAP with SAP NetWeaver 7.50
- E_HANAAW_12 - SAP Certified Development Specialist - ABAP for SAP HANA

For experienced ABAP developers, I suggest getting certified in the newest technologies, such as HANA and Fiori. These certifications may help you get a project quicker and/or at a better rate than others.

Q.5. The present buzz is around AI, machine learning, IoT, Big Data, and many other emerging technologies. SAP Leonardo works on making it easy to create frameworks for harnessing the latest tech. What are your thoughts on SAP Leonardo?

Leonardo is SAP's response to the demand for an AI platform. It should be an important part of SAP's offerings, mostly built on the SAP Cloud Platform. SAP has relaunched Leonardo as a digital innovation system. As I understand it, Leonardo allows customers to take advantage of artificial intelligence (AI), machine learning, advanced analytics, and blockchain on their company's data. SAP gives customers an efficient way of using these technologies to solve business issues. It allows you to build a system which, in conjunction with machine learning, searches for results that can be combined with SAP transactions. The benefit of SAP Leonardo is that all the company's data is available right in the SAP system. Using Leonardo, you have access to all human resources data and any other module data residing in the ERP system. Any company from any industry can make use of Leonardo; it works equally well for retailers, food and beverage companies, and medical industries, and for organizations working in retail, manufacturing, and automotive. An approach that works for one company in a given industry can be applied to other companies in that industry. Suppose a company operates sensors. They can link the sensor data with the data in their SAP systems, even link that with other data, and then use Leonardo's capabilities to solve problems or optimize performance. When a problem for one company in an industry is solved, a similar solution may be applied to the entire industry. Yes, in my opinion, Leonardo has a bright future and should be successful. For more information about Leonardo success stories, I encourage readers to check out the SAP Leonardo Internet of Things Portfolio & Success Stories.

Q.6. You are currently writing a book on ABAP Objects and design patterns, expected to be published by the end of 2018. What was your motivation behind writing it? Can you tell us more about ABAP Objects? What should readers expect from this book?

ABAP and ABAP Objects have undergone tremendous changes over time, both in features (and capability) and in syntax. It is one of the most unsung topics of today: it has been around for quite a long time, but most developers are not aware of it or are not comfortable enough to use it in their day-to-day work. ABAP is a vast community, with developers working in a variety of functional areas.
The concepts covered in the book will be generic, allowing learners to apply them to their particular area. This book will cover ABAP Objects (the object-oriented extension of the SAP language ABAP) in the latest release of SAP NetWeaver 7.5 and explain the newest advancements. It will start with the programming of objects in general and the basics of the ABAP language that the developer needs to know to get started. The book will cover the most important topics needed in everyday support jobs and for succeeding in projects. The book will be goal-directed, not a collection of theoretical topics. It won't just touch the surface of ABAP Objects, but will go in depth, from building the basic foundation (e.g., classes and objects created locally and globally) to intermediate areas (e.g., ALV programming, method chaining, polymorphism, simple and nested interfaces), and then finally into advanced topics (e.g., shared memory, persistent objects). Best practices for writing better programs with ABAP Objects will be shown at the end. No long stories, no boring theory, only pure technical concepts followed by simple examples, with code themed around football players. Everything will be presented in a clear, interesting manner, and readers will learn tips and tricks they can apply immediately. Learners, students, new SAP programmers, and SAP developers with some experience can use this as an alternative to expensive training books. The book will also save readers time searching the internet for help writing new programs.

Knowing ABAP Objects is key for ABAP developers these days to move forward. Everything from simple ALV reporting requirements, to defining and catching exceptional situations that may occur in a program, to the enhancement technology of BAdIs that lets you enhance standard SAP applications, requires a sound understanding of ABAP Objects. In addition, Web Dynpro application development, the Business Object Processing Framework, and even OData service creation to expose data that can be used by Fiori apps all demand solid knowledge of ABAP Objects.

Read next:
- How to perform predictive forecasting in SAP Analytics Cloud
- Popular Data sources and models in SAP Analytics Cloud
- Understanding Text Search and Hierarchies in SAP HANA


Why functional programming in Python matters: Interview with best selling author, Steven Lott

Aaron Lazar
17 May 2018
8 min read
Python is currently one of the most popular and desired programming languages, primarily for its simplicity, agility, and ability to be used across a variety of software development projects. Although an object-oriented language, Python supports an array of programming paradigms, including functional programming. Functional programming is a paradigm that treats computation as the evaluation of mathematical functions. It is quite advantageous, as it allows for efficient parallel programming and less error-prone code.

We recently interviewed Steven Lott, a true Python professional and best-selling author of a number of Python books. Steven talks a bit about modern Python and how the language adapts well to the functional paradigm, offering developers a range of solutions to build modern, cloud-based applications. He also talks about his most recent book, the second edition of his best seller, Functional Python Programming, and how it will benefit developers looking to enter the world of functional programming with Python.

About Steven

Steven F. Lott has been programming since the 70s, when computers were large, expensive, and rare. As a contract software developer and architect, he has worked on hundreds of projects, from very small to very large. He's been using Python to solve business problems for over 10 years. Steven is a technomad who lives in various places on the east coast of the U.S. Follow his technology blog to stay updated on the latest trends in tech. You may also connect with him on LinkedIn.

Key Takeaways

- Why learn Python? One of the major reasons developers appreciate Python is that it's simple and lets you create succinct and expressive programs. For example, data scientists prefer Python because they can build sophisticated analytical tools using simple functions and produce useful results.
- Why functional programming? The functional programming paradigm forms the perfect foundation for developers and architects to build and design modern architectures like serverless. But Python isn't inherently functional: although an object-oriented language at heart, Python can create higher-order functions and other functional features, and Python 3 has made this easier.
- Steven's Python 4 wish list: in future versions of Python, Steven hopes to see wider use of Unicode operator characters, like × in addition to * for multiplication and ÷ in addition to / for division. He also expects the PyPy and RPython projects to be more widely used; future Python versions will benefit from optimizations and a restructuring of the interpreter.
- One of the most helpful features of Python 3 is type hints, which allow one to write clear and implicitly documented code while preventing methods from being invoked with the wrong data types. Steven's latest edition of Functional Python Programming explores type hints in depth, along with core language features such as lambdas, generator expressions, functions, and callable objects.

Full Interview

Python is one of the top programming languages. List the top three features of Python that make programmers love it.

It seems like programmers love Python primarily because it allows them to create succinct, expressive programs. Before long, they discover the vast library of available code, which is another reason for Python's immense popularity. Many people adopt Python because of the low barrier to entry: it really is as simple as downloading it and starting to work.

You've been working with Python for over a decade. How has your experience been with Python as a primary development language?
Over my 40-year career, I've used a variety of languages, and I've found Python to be extremely productive. A team can build and deploy microservices-based applications at a tremendous pace. Data scientists can build sophisticated analytical tools using simple functions to produce useful results, without the overheads of complex compile and build environments.

Back when Python 3 came into existence, we saw a certain resistance to the notion of Python being apt for functional programming. How would you say Python has progressed since then?

At its core, Python is an object-oriented language. Consequently, some functional features aren't central. One of the essential functional design tools - creating higher-order functions - has always been part of Python. The wider use of generator functions in Python 3 has made functional Python programming much more common. Many Python applications are hybrids, mixing object-oriented and functional features of the language. Now that type hints are available, it becomes practical to use mypy to confirm that the code is very likely to work properly.

For a developer who's picking up the functional programming paradigm for the first time, what do you think are the prerequisites?

Functional programming is closely aligned with the core mathematical ideas of functions and functional composition. As a consequence, a minimal background in programming could be advantageous to help leverage essential function definitions and avoid needless state change. For programmers already heavily invested in procedural programming, it may be helpful to set the idea of stateful objects aside.

In the above context, how does your book, Functional Python Programming, Second Edition, prepare its readers to be industry-ready? What are the key takeaways for readers from your title, and how does it help with the learning curve?

The examples in the book are related to exploratory data analysis, an important skill in the broader area of data science. They also focus on the standard library, allowing someone to apply the functional design approach to other libraries and tools. I think a focus on the core language features (e.g., lambdas, generator expressions, functions, callable objects) provides a foundation that allows a programmer to apply the core ideas more widely to different kinds of problems and other software packages.

What new and updated content is available in this edition, for developers who've purchased your previous book?

Almost all of the examples have been rewritten to include type hints. This can be an important quality check, helping to ensure the Python code works. When used with doctest examples, it becomes relatively easy to provide reliable, correct code. In a few cases, external packages (i.e., the pymonad library) don't have type hints, and the examples reflect this gap.

Can you throw some light on functional reactive programming and how FRP with Python is boosting the implementation of modern architectures like cloud native and serverless?

The central idea of serverless programming - a collection of isolated functions - fits the functional programming paradigm very elegantly. The processing is generally stateless, with stand-alone functions waiting for their inputs. Ideas like "choreography" of web services work with this idea of stateless functions that respond to an input by producing an output. This leads to careful separation of persistence and state change from the other transformational processing. This helps create software with easy-to-understand behavior and implementation code that's very expressive of the algorithm.
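To make the functional features Steven mentions concrete - higher-order functions, lazy generator expressions, and type hints - here is a minimal sketch. The function names and the temperature data are invented for illustration; they are not examples from the book:

```python
from typing import Callable, Iterable, Iterator

def compose(f: Callable[[float], float],
            g: Callable[[float], float]) -> Callable[[float], float]:
    """Return a new function equal to f applied after g (a higher-order function)."""
    return lambda x: f(g(x))

def clean(readings: Iterable[float]) -> Iterator[float]:
    """Lazily drop invalid readings; nothing is computed until the result is consumed."""
    return (r for r in readings if r >= 0.0)

# Build a pipeline from small, stateless pieces.
fahrenheit_to_celsius: Callable[[float], float] = lambda f: (f - 32.0) * 5.0 / 9.0
round1: Callable[[float], float] = lambda x: round(x, 1)
convert = compose(round1, fahrenheit_to_celsius)

raw = [68.0, -999.0, 73.4, 50.0]          # -999.0 marks a bad sensor read
celsius = [convert(r) for r in clean(raw)]
print(celsius)                             # [20.0, 23.0, 10.0]
```

Because every piece is a stateless function with annotated inputs and outputs, a checker like mypy can verify the pipeline, and the same small functions compose into larger ones, which is the hybrid object-oriented/functional style described above.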
As Python inches towards a 4.0, what do you think should or can be changed or rectified in the language, for the better?

At some point, I expect the PyPy and RPython projects to create some optimizations leading to a fundamental restructuring of the interpreter. Perhaps these changes could remove the need for the GIL (Global Interpreter Lock) by exposing a minimal kernel of code that requires exclusive access to internal data structures. Of more general interest, I'd hope to see wider use of Unicode operator characters like × in addition to * for multiplication, and ÷ in addition to / for division. Perhaps ∻ could be adopted for truncated division. This could also lead to the use of ℝ instead of float and ℤ instead of int, providing a more mathematical look to type hints. It would be nice to expand the use of the Unicode character set to create more readable programs.

List three reasons for developers to choose your book as the book of choice for functional programming in Python.

Programmers who want to create succinct and expressive code often find functional design helps them fulfill this. The Functional Python Programming book provides extensive examples that show multiple ways to achieve this goal. In many cases, generator functions and lazy processing can be a large performance improvement: a generator function can use less memory than a large data collection, and this change can be helpful. The book provides a number of examples of lazy processing to avoid creating large, in-memory collections. Finally, it can be difficult to get started with type hints. The use cases in this book show type hints in somewhat more complex, real-world situations.

If you enjoyed this interview, check out Steven's latest edition of Functional Python Programming. He is leveraging Python to implement microservices and ETL pipelines. His other titles with Packt Publishing include Python Essentials, Mastering Object-Oriented Python, Functional Python Programming, and Python for Secret Agents.

Read next:
- What is the difference between functional and object oriented programming?
- Building functional programs with F#
- Seven wrongs don't make the one right: Solving a problem with Functional Javascript


"The Vue.js community is one of Vue's biggest selling points" - Marina Mosti on Vue and JavaScript in 2019 [Interview]

Richard Gall
08 Nov 2019
9 min read
Vue occupies an interesting position in the triumvirate of frontend JavaScript frameworks. Not hyped to the extent that React is, and not as established as Angular, it's spent the last couple of years quietly minding its own business and building an engaged and enthusiastic community of developers. One of these developers is Marina Mosti: her book Building Forms with Vue.js is just the latest step in her career journey from a backend developer getting frustrated with Java to Vue.js evangelist and educator. She's a great person to explain the attraction of Vue.js and to provide an insight into how she first entered the community - luckily, I was able to chat with her.

Buy Building Forms with Vue.js on the Packt store.

Her background is interesting: "I actually started out as a PHP developer and found myself in a position where I forced myself into learning front end. It's not until very recently that I started doing front end in terms of it being my main focus," she says. Rather than moving deeper down the stack, she has gone the other way, gravitating towards the front end. That might not be completely conventional, but it's also indicative of the evolution of both JavaScript and frontend development in general.

Read the first chapter of the book for free on the Packt platform.

Rethinking JavaScript

Things today, Marina suggests, are quite different. She's quick to tell me, for example, that "the current state of JavaScript is just so much better than what it used to be," and recalls a general antipathy towards JavaScript in the early years of her career that she "dragged... around for many years."

"I went to this very small school where they taught us the basics of front end development," she explains. "And for good or for bad the teachers were always very adamant about saying 'oh don't bother learning vanilla JavaScript because we have jQuery and vanilla JavaScript is so bad that you're not gonna use it.'"

However, the framework boom changed this. "I got to a point where all these cool frameworks were coming out and you just realise 'hey, I don't know JavaScript; I know jQuery, so how do I make this jump?'"

This 'jump' wasn't without challenges. "Just having to go back and learn the basics - that has been the most 'challenging' part of going onto front end - because the front end that I knew [at the time] was PHP server-side-generated HTML code with maybe a little bit of JavaScript, maybe some CSS."

Read next: Vue maintainers proposed, listened, and revised the RFC for hooks in Vue API

Laravel and Vue.js

Marina explains that she first encountered Vue while using Laravel. "We... wanted to take advantage of this built-in connection that Vue had with Laravel. Obviously this didn't involve any fancy setup; there was no Vue CLI, none of this nice pre-compiling or anything. It was just injecting Vue into the HTML and creating the components there on the fly."

However, discovering Vue in this way proved to be revelatory: it actually underlined what makes it, in Marina's view, a great front end framework. "You don't really have to commit to the whole framework," she says; "it just got the job done for what I needed at that moment."

What problems can Vue.js solve?

Marina has a pretty clear perspective on the challenges in front end development. "The problem we're currently facing in front end development... is that people don't want the old browsing experience where you're clicking around and having to wait for reloads," she says.
"We want to make applications... that are very performant, that have a great user experience, where you are not waiting for page refreshes - it flows. What we are looking for is having your applications flow in a way that makes sense, as if you were using a desktop application."

Vue is particularly good for this, Marina explains, because of its "reusable component structure." To a certain extent, this way of working is just another example of the wider trend across engineering towards breaking things apart. "You are trying to make these small units of code that do this specific thing... a very common way to describe it is like this Lego system where you're just putting pieces on top of each other and it just starts making sense." I've heard that analogy before in relation to containers - there's clearly a recurring theme that's evolving out of core principles of design.

If Vue is well-suited to building lightweight but highly performant front ends, another important element is that it makes development relatively easy. Again, Marina contrasts working with Vue today with what JavaScript development looked like in the past. "You used to have... these massive, massive amounts of code, and separation of concerns was just very complicated to manage. You had [to do] a lot of overhead and work... trying to figure out how we were going to structure these files so that it makes sense."

The Vue CLI

Tooling was also more complicated, something the Vue CLI has helped to solve. "You had to deal with - at a very intricate level - how tools like Grunt worked, for example. And now you have these pre-built tools like the Vue CLI which allow you to not have to really think about things," she says. "You don't have to think like 'hey, how is this going to get compiled? How is webpack going to figure things out?' At least not at an entry level, because you have it all neatly packed in this box for you with the Vue CLI."

Comparing Vue.js to React and Angular

Although it's clear that Marina is incredibly passionate and enthusiastic about Vue, she's also circumspect about ranking JavaScript frameworks against each other. "All 3 frameworks are fantastic. All 3 get the job done. This is like asking someone why is this your favorite flavor of ice cream?!"

Vue.js v. React

She notes that React and Vue have a lot in common. "They share a lot of similarities - both of them use a virtual DOM... they both have this reactive component structure." However, the key difference, from Marina's perspective, is JSX. "If you're talking about React, you're talking about JSX, this approach where everything is JavaScript. You're writing HTML inside the JavaScript, you're writing CSS inside the JavaScript." It's JSX that puts Marina off React; the way it requires you to work "doesn't really click. I know how to do it, but just in the way I like to code things I prefer just having the separation where HTML is HTML, and where CSS is CSS."

Want to learn React.js? Search the latest React eBooks and videos.

Read next: Ionic React released; Ionic Framework pivots from Angular to a native React version

Vue.js v. Angular

Angular, meanwhile, is "great for enterprise projects where you need this huge, huge framework," says Marina. "But that also comes at the cost of having to know the whole framework.
All the libraries, everything that Angular brings to the table; you have to know TypeScript. It's just very opinionated about what it does, and sometimes the shoe is going to be very big for the project."

So, for Marina, Vue has a degree of flexibility. It's not as opinionated as Angular, and it doesn't require you to write using JSX. "It can grow up until the point you need it to... from the smallest component in your application to powering full enterprise solutions." And related to this, it means Vue is accessible - the learning curve isn't that steep. "Vue is just very gentle in that you can start using it; you can start building things right away."

"There's a very good payoff in making yourself an expert in it... once you start getting into the core of Vue, and understanding all the little tools that are at your disposal... you can start building upon this knowledge... the framework can grow and adjust to what you're needing."

Search the latest Angular eBooks and videos.

The Vue.js community

There are undoubtedly many technical reasons to consider using Vue. However, another aspect that Marina emphasises throughout our conversation is how welcoming and supportive the Vue community is. "The Vue community is one of the biggest selling points of why you should pick Vue and why Vue is so amazing."

The Vue community was, Marina says, integral to getting her to where she is now. "I was, at one point, not very into Vue... and I just found a very welcoming community and a very inclusive community... People in the community care about other developers that are getting into Vue. We try our best to make this feel like a very safe, inclusive community, to just get people in here and get them developing Vue and help them out with the problems they're having."

Marina deserves credit for playing a part in fostering a welcoming and supportive culture. Not only has she created a wealth of learning materials (such as a great free introductory tutorial series), she also works closely with Vue Vixens, and provides mentoring and support for other women finding their way in the industry. "This focus on education just basically became my goal... hey, let's do things to teach people, to get more people involved with Vue," she says. In an industry that's sometimes defined by hyper-competitiveness, and marred by toxicity, it's certainly a worthwhile and important goal. It's something we can all work at.

Follow Marina Mosti on Twitter: @MarinaMosti
Follow Vue Vixens on Twitter: @VueVixens


Security experts, Wolf Halton and Bo Weaver, discuss pentesting and cybersecurity [Interview]

Guest Contributor
18 Jan 2019
4 min read
This is Part 2 of the interview with our two Kali Linux experts, Wolf Halton and Bo Weaver, on using Kali Linux for pentesting. In this section, we talk about the role of pentesting in cybersecurity.

Previously, the authors talked about why Kali Linux is the premier platform for testing and maintaining Windows security, covering the advantages and disadvantages of using Kali Linux for pentesting. They also talked about their love for the Kali platform. Wolf says, "Kali is a stable platform, based upon a major distribution with which I am very familiar. There are over 400 security tools in the Kali repos, and it can also draw directly from the Debian Testing repos for even more tools." Here are a few more questions we asked them about pentesting in cybersecurity in general.

Can you tell us about the role of pentesting in cybersecurity? According to you, how has pentesting improved over the years?

Bo Weaver: For one thing, pentesting has become an accepted and required practice in network security. I remember the day when the attitude was, "It can't happen here, so why should you break into my network? Nobody else is going to." Network security in general wasn't even thought about by most companies, and spending money on network security was seen as a waste. The availability of tools has also grown in leaps and bounds, as has the availability of documentation on vulnerabilities and exploits, and the industry's awareness of the importance of network security.

Wolf Halton: The tools have gotten much more powerful and easier to use. A pentester will still be more effective if they can craft their own exploits, but they can now craft them in an environment of shared libraries such as Metasploit, and there are stable pentesting platforms like Kali Linux Rolling (2018) that reduce the learning curve to becoming an effective pentester. Pentesting is rising as a profession, along with many other computer-security roles. There are compliance requirements to do penetration tests at least annually or whenever a network is changed appreciably.

What aspects of pentesting do you feel are tricky to get past? What are the main challenges that anyone would face?

Bo Weaver: Staying out of jail. Laws can be tricky. You need to know and fully understand all laws pertaining to network intrusion for both the state you are working in and the federal government. In pentesting, you are walking right up to the line of right and wrong and hanging your toes over that line a little bit. You can hang your toes over the line, but DON'T CROSS IT! Not only will you go to jail, but you will never work in the security field again, unless it is in some dark corner of the NSA.

Never work without a WRITTEN waiver that fully contains the "Rules of Engagement" and is signed by the owner or a C-level person of the company being tested.

Don't decide to test your bank's website, even if your intent is good. If you do find a flaw and report it, you will not get a pat on the back but will most likely be charged with hacking. Banks especially get really upset when people poke at their networks. Yes, some companies offer bug bounty programs. These companies have Rules of Engagement posted on their site, along with a waiver to take part in the program. Print this and follow the rules laid out.

Wolf Halton: Staying on the right side of the law. Know the laws that govern your profession, and always know your customer.
Have a hard copy of an agreement that gives you permission to test a network. Attacking a network without written permission is a felony and might reduce your available career paths.

Author Bio

Wolf Halton is an authority on computer and internet security, a best-selling author on computer security, and the CEO of Atlanta Cloud Technology. He specializes in business continuity, security engineering, open source consulting, marketing automation, virtualization and data center restructuring, network architecture, and Linux administration.

Bo Weaver is an old school ponytailed geek. His first involvement with networks was in 1972 while in the US Navy, working on an R&D project called ARPANET. Bo has been working with and using Linux daily since the 1990s and is a promoter of Open Source. (Yes, Bo runs on Linux.) He now works as the senior penetration tester and security researcher for CompliancePoint, an Atlanta-based security consulting company.

Pentest tool in focus: Metasploit
Kali Linux 2018.2 released
How artificial intelligence can improve pentesting

“As Artists we should be constantly evolving our technical skills and thought processes to push the boundaries on what's achievable,” Marco Matic Ryan, Augmented Reality Artist

Sugandha Lahoti
18 Sep 2018
11 min read
Augmented and Virtual Reality is taking on the world, one AR app at a time. Almost all tech giants have their own AR platforms, from Google's ARCore to Apple's ARKit to Snapchat's Lens Studio. Not just that, there are now various galleries and exhibits focused solely on AR, where designers and artists showcase their artistic talents combined with rich augmented reality experiences. While searching for designers and artists working in the augmented reality space, we stumbled upon Marc-O-Matic, aka Marco Ryan. We were fascinated by his artwork and wanted to interview him right away.

[embed]https://vimeo.com/219045989[/embed]

As his website bio says, "Marco is a multidisciplinary Artist, Animator, Director, Storyteller and Technologist working across Augmented and Virtual Reality technologies." He shared with us his experiences working with augmented reality and told us about his creation process, his current projects, tips and tricks for a budding artist venturing into the AR space, and his views on merging technology and art.

Key Takeaways

A well-rounded creative is someone who can do everything from creating the art and the associated stories to executing the technical aspects of the work to make it come to life. The future belongs to these creative types who love to learn and experiment every day.
An artist must consider three aspects before taking on a project: first, how to tell their story in a more engaging and interesting manner; second, how to combine different skill sets together; and third, the message their work conveys.
Augmented Reality has added a new level of depth to the art experience, not just for artists but also for viewers. Everyone has a smartphone these days, and with AR you can add many elements to an art piece. You can add sound, motion, and 3D elements to the experience, which engage more of your senses.
It is easy for a beginner artist to get started with creating augmented reality art. You need to start with learning a basic language (mostly C# or JavaScript), and then you can learn more as you explore. Tools and platforms such as Adobe After Effects, Maya, and Blender are good for building shaders, materials, or effects.
Unity 3D is a popular choice for implementing the AR functionality, and over 91% of HoloLens experiences are made with Unity. It supports a large number of devices and also has a large community base. Unity offers a highly optimized rendering pipeline and rapid iteration capabilities to create realistic AR/VR experiences.
AI and Machine Learning are at the forefront of innovating AR/VR. Machine learning algorithms can be used in character modeling; they can map your facial movements directly to a 3D character's face. AI can make a character react emotionally based on the tone of the audio and can also automate tasks, allowing an artist to focus solely on the creative aspects.

Full Interview

How did you get started in AR? How did your journey from being an artist to an AR artist begin?

I started experimenting with immersive AR/VR tech about 2-3 years ago. At the time, buzzwords like virtual and augmented reality started trending, and it had me curious to see how I could apply my existing skills in this area. Prior to getting into AR and VR, I came from a largely self-taught background in art and illustration, which then evolved into exploring film and animation, and finally led to game design and programming.
Exploring all these areas allowed me to become a well-rounded creative, meaning I could do everything from creating the art and the associated stories to executing the technical aspects of the work to make it come to life.

That's quite impressive and fascinating to know! What programming language did you learn first and why? What was your approach to learning the language? Do you also dabble in other languages? How can artists overcome the fear of learning technology?

Working with the game engine Unity 3D to create my VR and AR experiences, the engine allows you to program in either C# or JavaScript. C# was the most prevalent language of the two, and for me it was easier to pick up. As a visual learner, I initially came to understand the language through node-based programming tools like Playmaker and uScript. These plug-ins for Unity are great for beginners, as they allow you to visually program behaviors and functionality into your experience by creating interconnected node trees or maps, which then generate the code. I'm familiar with this form of logic building, as other programs I use, such as Adobe After Effects, Maya, and Blender, use similar systems for building shaders, materials, or effects. Through node-based programming or visual scripting, you can take a look under the hood and understand the syntax iteratively. It's like understanding a language through reverse engineering: by seeing the code generated from the node maps you create, you can quickly understand how the language is structured.

I try to think of learning new things as adding skills to your arsenal, and the more skills you possess, the greater you can expand on your ideas. I don't think it's so much a fear of learning new technology. I think it's more a question of "Will learning this new technology be worth my time and benefit my current practice?" The beautiful age we live in makes it so much easier to learn new skills online, so there's plenty of support already available for anyone wanting to explore game design technologies in their creative practice.

What tools do you use in the process and why those tools? What is the actual process you follow? What are the various steps?

The augmented reality art I create has two parts to it:

Creating the Art

Everything is drawn on paper first using your typical office ballpoint pens, ink wash, and sometimes watercolours or gouache. I prefer creating my work that way. To me, keeping that 'hand-crafted' style and aesthetic has its charm when bringing it to life through augmented reality. I think it's really important to demonstrate the value of traditional mediums, techniques, and disciplines when working with immersive technologies, as they're still very valid in creating engaging content. I see a lot of AR experiences that lack charm and feel somewhat 'gamey', which is why I want to continue integrating more raw-looking aesthetics in this area.

Augmented Representation & Implementation

Once the art is planned and created, I scan it and start splicing it up digitally. From the preserved textures and linework, I then create a 3D representation of the artwork through a 3D modeling and animation package called Blender. It's one of many 3D authoring tools out there. It not only packs a significant number of features, it's also free, which makes it ideal for beginners and students.
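Since Blender is scriptable in Python, here is a rough, hypothetical sketch of the step described above: loading a scanned artwork into Blender as a textured plane. It assumes Blender 2.8+ running the script from its built-in Python environment, and the file path is a placeholder; this is an illustration of the general workflow, not Marco's actual pipeline.

```python
# A minimal Blender (bpy) sketch: load a scanned artwork as a textured plane.
# Run inside Blender's Python console or as a script; the path is hypothetical.
import bpy

# Create a flat plane to carry the scanned illustration
bpy.ops.mesh.primitive_plane_add(size=2)
plane = bpy.context.active_object

# Build a simple node-based material with the scan as an image texture
mat = bpy.data.materials.new(name="ScannedArtwork")
mat.use_nodes = True
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("/path/to/scanned_artwork.png")
bsdf = mat.node_tree.nodes["Principled BSDF"]
mat.node_tree.links.new(bsdf.inputs["Base Color"], tex.outputs["Color"])

# Attach the material to the plane
plane.data.materials.append(mat)
```

From a plane like this, the mesh can be sculpted and animated before export to the game engine.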
Once the 3D representation of the work is created, it's imported into a game engine called Unity 3D, where I implement the AR functionality. Unity 3D is a widely used game engine. It's one of many engines out there, but what makes it great is its support for deploying to all manner of devices. It also has a large community base behind it should you need help.

How long does a typical project take? Do you work by yourself or with a team? How long do you work?

On average, an augmented artwork may take anywhere from 3 to 4 weeks to create, which includes the art, animation, modeling, and AR implementation. When I first started out it would take much longer, but over time I've streamlined my own processes to get things done faster. My alias Marc-O-Matic tends to be mistaken for a team, but I'm actually just one person. I much prefer being able to create and direct everything myself.

What is your advice to a budding artist who is interested in learning tech to produce art?

From my experience, don't confine yourself to one specific medium, and try practising a different skill or technique every day. As artists, we should be constantly evolving not just our technical skills but also our thought processes to push the boundaries on what's achievable. 'How can I tell my story in a more engaging and interesting manner?' 'What can I create if I combine these different skill sets together?' 'How do I bring an end to systematic racism and white privilege?' etc. The more knowledge you have, the greater you can push the boundaries of your creative expression.

How do you support yourself financially? As in, do you also sell your art pieces or make art for clients, etc.? Where do you sell stuff and meet people?

I work under my own artist entity, 'Marc-O-Matic.' I much prefer working independently to working within larger studios or agencies. I juggle a balance between commercial work and personal projects. There's generally a high demand for augmented/virtual reality experiences, and I'm really lucky and grateful that clients want content in my particular style. I work with a variety of organisations, generally consulting, providing creative direction, and in most cases building the experiences too. Aside from the commercial work, I'm currently touring my own augmented reality art/storytelling collection called 'Moving Marvels', where audiences get to see my illustrated works come to life right in front of them. It's also how I sell various limited edition prints of my augmented artworks. My collection is exhibited at tech/innovation conferences, symposiums, galleries, and even at universities and educational institutes. It's a great way to make connections and demonstrate what immersive technologies can do in a creative capacity.

No interview would be complete without a discussion on AI and machine learning. As a technophile artist, are you excited about merging AI and art in new ways? What are some of your hopes and fears about this new tech? Do you plan to try machine learning anytime soon?

It's like a double-edged sword with +5 to Creativity. Technology can enhance our creative abilities and processes in a number of ways. At the same time, it can also make some of us lazy because we become so reliant on it. Like any new technology that aims to assist in the creative process, there's always the fear that technology will make creatives lazy or will replace professions altogether. In some ways technology has done this, but it has also created new job opportunities and abilities for artists.
In areas like character animation, for example, the assistance of machine learning algorithms means creatives can worry less about the laborious physical processes of rigging and complex animation and focus more on the storytelling. For example, we can significantly reduce production times in the area of facial rigging through facial recognition. By learning the behaviour and structure of your own face, machine learning algorithms can map your facial movements directly to a 3D character's face. By simply recording your own facial movements and gestures, you've created an entire impression map for your 3D character to use. What's also crazy is that, on top of that, by running a voice-over track on that 3D character, you can train it to sync with the voice as well as have the entire face react emotionally based on the tone of the audio. It's game-changing but also terrifying stuff, as this sort of technology can be used to create highly realistic fake impressions of real people. As an animator myself, I've started experimenting with this technology for my own VR animations. What would take me hours or even days to animate for a talking 3D character can now take mere minutes.

Author Bio

Marc-O-Matic is the moniker of Marco Matic Ryan. He is a multidisciplinary Artist, Animator, Director, Storyteller and Technologist working across Augmented and Virtual Reality technologies. He is based in Victoria, Australia.

https://player.vimeo.com/video/125218939

Unity plugins for augmented reality application development.
Google ARCore is pushing immersive computing forward.
There's another player in the advertising game: augmented reality.

Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview]

Richard Gall
29 Jul 2019
11 min read
"The service control platform is the next-gen of traditional API management," Kong CTO and co-founder Marco Palladino tells me. "It's not about APIs any more, it's about services."

This shift in the industry is what makes Kong so interesting. It's one of the reasons I wanted to speak to Palladino. Its success is an index of businesses' technological priorities today, and useful as an indicator of the way the world is going - one, it's safe to say, that's increasingly cloud-native and highly distributed. As part of a broad and growing cloud-native ecosystem, Kong is playing an important role in the digital transformation of thousands of companies around the world. Furthermore, the fact that it follows an open core model, with an open source version of Kong made available in 2015, underlines the way in which the platform occupies a valuable position in the valley between developer enablement and managerial control.

This isn't always an easy place to be. 'Digital transformation' is a well-worn phrase, but behind it is the messy truth about the reality of how companies use technology: at their own pace and often shaped by necessity rather than best practice. So, with Kong a useful proxy for the state of the software industry, I wanted to dive deeper into Kong's challenges, as well as the opportunities the platform can potentially unlock for its users.

What is Kong?

Before going any further it's probably worth explaining what Kong actually is. Essentially, Kong is an API management platform - it allows teams to manage how services interact and move within their architecture.

[image via konghq.com]

"APIs allow information to be in flight within our systems," Palladino explains. Information can, he continues, either be "at rest in a database" or "in use by a monolith or microservice." Naturally then, it follows that "the more we decouple - the more we distribute our applications - the more information will be... in flight."

This is why Palladino believes Kong is so valuable today. The "flight" of information (he never uses the word "data") necessarily implies a network and, as anyone familiar with L. Peter Deutsch's 7 Fallacies of Distributed Computing will know, "the network is unreliable."

"So how do we protect that communication? How do we secure it? How do we connect it? How do we route that transmission?" Palladino says. "The more we decouple, the more we distribute, the more those problems become critical, if not essential, for a successful microservice organization... what Kong provides is a platform that allows us to intelligently broker the flow of information across the organization."
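To ground the idea of brokering traffic between services, here is a hedged sketch of driving Kong's Admin API from Python, assuming a local Kong node with the Admin API on its default port 8001. The service name and upstream URL are hypothetical; this illustrates the general registration flow rather than any setup discussed in the interview.

```python
# A rough sketch: register a service, expose a route, and add rate limiting
# via Kong's Admin API (default port 8001). Names and URLs are hypothetical.
import requests

ADMIN = "http://localhost:8001"

# Register an upstream service with Kong
svc = requests.post(f"{ADMIN}/services", data={
    "name": "orders-service",                # hypothetical service name
    "url": "http://orders.internal:5000",    # hypothetical upstream
})
svc.raise_for_status()

# Expose the service on a public path via a route
route = requests.post(f"{ADMIN}/services/orders-service/routes", data={
    "paths[]": "/orders",
})
route.raise_for_status()

# Add a plugin, e.g. rate limiting, so Kong brokers the traffic safely
plugin = requests.post(f"{ADMIN}/services/orders-service/plugins", data={
    "name": "rate-limiting",
    "config.minute": 60,
})
plugin.raise_for_status()
```

Once registered like this, every request to /orders flows through Kong, which is where the security, routing, and observability concerns Palladino describes get handled.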
Why does the world need Kong?

Do we really need another API management solution? The short answer is relatively straightforward: the world is moving toward (micro)services, and Kong provides you with a way of managing them. This control is crucial, moreover, because "in microservices, being slow is the new down - if you're slow, you're down."

But that's only half of the picture. This "new world" is still in development and transition, with each organization following its own technological path. Kong is necessary because it supports and facilitates these unique transitions, all of them happening in different ways around the world. "Kong is a platform agnostic system that can run across different architectures, but most importantly it can run across different platforms," Palladino says. "While we do work very well with Kubernetes, we also support... traditional legacy virtual machines or bare metal infrastructures. And the reason we do support both the modern and the old is that we're working with enterprise organizations... [who] might be deploying new greenfield applications in Kubernetes or OpenShift but... still have a significant part of their software running in traditional virtual machines."

One of Kong's strengths, Palladino suggests, is its pragmatism and the way in which the company is alive to its customers' respective levels of technological maturity. "I'm very proud to say we're a very pragmatic company. While we do work with developers to make sure that Kong is a leader in what we do in microservices and traditional API management, we're also very pragmatic - we understand that's the end goal, it's not necessarily the current state of affairs in our enterprise organizations."

Read next: It's Black Friday: But what's the business (and developer) cost of downtime?

"We're not just a vendor. We don't give you the platform and then let you figure it out. We want to be a strategic technology partner with our customers."

Kong sees itself as a 'strategic technology partner'

However, while every organization has its own timeline when it comes to technology, its CTO describes Kong as a platform that is paving a way for the future rather than simply catering to the needs of its customers. "We're not an industry follower, we're an industry leader," says Palladino. "We're looking at these large scale systems that organizations are creating and we're thinking how can we make that better from a security standpoint, from a discoverability standpoint, from a documentation standpoint?"

This isn't just Silicon Valley posturing. As the software world moves toward cloud and microservices, the landscape shifts at a much faster rate. That makes it essential for organizations like Kong to pave the way for the future rather than react to the needs and demands of their customers. In turn, this means the scope of Kong's work is growing. "We're not just a vendor. We don't give you the platform and then let you figure it out. We want to be a strategic technology partner with our customers," says Palladino. "We engage with them, not just from a low-level standpoint with the teams, but we also engage... from a higher level executive standpoint, because we want to enable not just the technology but the business itself to be successful."

This is something Palladino is well aware of. Kong's customers aren't, after all, needlessly indulging in "an exercise in adopting new technologies," but are rather doing so in response to business requirements. Having a more extensive relationship - or partnership, as Palladino puts it - ensures that digital transformation is both impactful and relatively risk-free.

"You simply can't afford to have a black box at the center of your infrastructure. You need to know what's happening and how services are interacting with one another - the way of achieving this is through open source software."

Open source and the rise of bottom-up software adoption

However, although Kong positions itself as a company attuned to the business needs of its customers, it's also clear that it understands the developer's power in today's technology ecosystem. Palladino sees open source as playing a big part in this. And as an open core platform, Kong is able to build a community of creative and innovative developers around the wider product ecosystem.
But Palladino is also keen to point out that you can't really separate open source from the API and microservices revolutions. "10 years ago APIs used to be a nice-to-have," Palladino says. The approach was, he explains, little more than a kind of curiosity or a desire for a small community around a platform: "let's open up some APIs, let's open up this monolithic black box and see what happens." However, "it's not like that any more."

If "APIs are the core business of every organization," as Palladino puts it to me, "then you simply can't afford to have a black box at the center of your infrastructure. You need to know what's happening and how services are interacting with one another - the way of achieving this is through open source software."

"When we look at the microservices transition, we look at Docker, we look at Kubernetes, we look at Elastic, we look at Zipkin... Kafka... Kong, what's the baseline? Open source. Each one of these products is open source at their core. Open source is driving this new transformation within the enterprise," says Palladino.

Palladino continues on this, offering a compelling narrative of why open source has become the dominant form of software. He begins with the problems posed by traditional central IT, "an ivory tower far from the business, far from real usage," which consequently "was not able to iterate fast enough to answer those business requirements."

"The teams building the apps were closer to the business, closer to the customer, and they had to pick the right solution in order to be successful. And so what these... teams did was to go into self-service ecosystems - like... CNCF [Cloud Native Computing Foundation] - and pick and choose open source technologies they could adopt without having to go through an enterprise process... that's why open source became more important - because it allowed them to be in production and get business value without having to deal with the bureaucracy of central IT - so it's a bottom-up adoption from the teams all the way up, as opposed to from central IT down to all the teams."

Developer freedom and organizational control

Palladino refers to 'bottom-up' adoption a number of times throughout our conversation. He argues that it's an industry shift that has been initiated by microservices. "With the emergence of microservices something happened in the industry - software is not being sold top-down any more as much as it used to be - it's more bottom-up adoption."

He also explains that having an open source element to the Kong offering is actually helping the company to grow. It's a useful onboarding route. "Sometimes - often, actually - Kong is being adopted just because the organization happens to already be running Kong in production, and you need enterprise features and enterprise support," says Palladino.

But while developer power seems to be part of this new post-central-IT world, it also makes Kong even more valuable for those in leadership positions. Taking the example of multi-cloud, Palladino explains that "it's very rare to see a CIO saying we would like to be multi-cloud. Sometimes it happens, [but] most likely the organization is already multi-cloud because it naturally evolved to be multi-cloud.
Different teams, different products using different clouds, different services." With the wealth of tools, platforms, and environments being used by forward-thinking developers trying to solve the problems in their immediate vicinity, it makes sense that the "C-level executives" who express an interest in Kong are looking for "a way to consolidate and standardize how their APIs and microservices are being managed and secured across multiple clouds, across multiple platforms."

"A big concern for the leadership of the top Global 5000 organizations we're working with... [is] making sure they can consolidate how security is being done, how monitoring is being done, how observability and enablement is being done across multiple clouds," Palladino says.

Read next: Honeycomb CEO Charity Majors discusses observability and dealing with "the coming armageddon of complexity" [Interview]

The future of Kong and API management

The future for Kong looks bright. The two new features released by the platform - Kong Brain and Kong Immunity - launched earlier this year, and they signal what the broader trends might be in the software infrastructure and systems engineering space. Both are backed by artificial intelligence and provide cutting-edge ways to manage the reliability and security of the services inside your infrastructure.

Kong Brain, Palladino explains, lets you "listen to... runtime traffic to auto generate documentation for APIs... services, and monoliths" that organizations have had no visibility into "after 20 years of running them in production." Essentially then, it's a tool that will prove incredibly useful in working with legacy software; it will certainly encourage the 'lift and shift' mentality that we're starting to see emerge.

Kong Immunity, meanwhile, is a security tool that uses machine learning to detect anomalies in traffic, allowing users to identify security threats and breaches within their system. "Traditional web application firewalls... don't work within east-west traffic [server to server]," Palladino says. "They work perhaps in north-south traffic [client to server], but they're slow, they're very heavyweight." Kong, then, "tries to take away that security concern by providing a machine learning platform that can asynchronously, with no performance loss, learn from existing traffic across every version of every microservice."

With releases like these, it's hard to dispute Palladino's assertion that Kong is indeed an 'industry leader.' However, as Palladino also appears to be aware, to be truly successful it's not enough to just lead the industry - you have to make sure you can bring people with you.

Learn more about Kong here, and follow Marco Palladino on Twitter.

Why switch to Angular for web development - Interview with Minko Gechev

Amarabha Banerjee
09 Apr 2018
10 min read
Angular is one of the most popular JavaScript frameworks. Today, it's jostling for position with React and Vue as the leading front-end development framework. But if you're not using it, you're probably wondering why you should switch to Angular at all. What makes Angular such a good web development framework? And what does it give you that the likes of Vue and React don't?

To help you decide if you should switch to Angular, we spoke to Minko Gechev, author of the third edition of Switching to Angular. Minko has worked incredibly closely with the Angular project - he was the author of the Angular 2 style guide. That means he's well placed to explain the benefits of Angular and to help you decide if it's the right framework for you.

Key Takeaways:

Angular, contrary to public opinion, has a consistent development lifecycle and is a trustworthy framework for developers to start their careers with.
The primary goal of the Angular community is to make the framework lighter and much more efficient.
The book Switching to Angular - Third Edition works as a perfect stepping stone for new developers looking to start their journey with Angular.
AngularJS and Angular are two different frameworks. People often incorrectly refer to AngularJS as Angular version 1, which makes developers think that Angular is the next version of AngularJS. Although similar in nature, they should never be considered predecessor and successor.

How has Angular come to dominate front-end development?

What do you think is the primary reason for Angular to have the lion's share of the front-end development market?

Minko Gechev: There are several important reasons:

Angular is a really well-designed framework which provides an entire solution for developers working on the user interface. The framework is developed and maintained by Google. This means that it is very well engineered and a high-quality solution.

Angular provides a great development experience. Since the beginning, Angular was designed to simplify the development process dramatically. The framework provides friendly APIs which make the process of building complex user interfaces trivial. Out of the box, Angular has very performant and well-tested implementations of two-way data-binding, dependency injection, and change detection. Notice that these will not only reduce the boilerplate in the process of application development but also make our code more testable, thanks to dependency injection. This means that the Angular team also thinks about improving the quality characteristics of our applications.

On top of that, Angular provides a lot of external modules maintained by the Angular team itself, so the same team which works on the Angular core also develops the Angular router, Angular forms, Angular Material, etc. This allows us to confidently use the entire stack without worrying that the individual pieces of the puzzle may not fit well together.

Another important fact is that the framework is developed by Google. This implies high-quality engineering, and enterprises and individuals can confidently bet on Angular as a long-term solution, as it is unlikely that Google will drop support for the framework unexpectedly. Keeping in mind that Angular is used in tonnes of projects inside Google itself, we can rest assured that it's being tested at very high scale constantly. Google would never make a compromise in terms of quality.

Development experience is something I am really interested in.
The framework is developed with TypeScript. It provides great tooling support implemented in different text editors and IDEs; with precise autocompletion suggestions and warnings about type errors, developers' productivity can increase a lot. On top of that, Angular's templates are very convenient for static analysis. This helps programmers build tools which ease the development process even further. A few examples of such tools are codelyzer and the Angular Language Service. Finally, the framework provides a predictable release schedule following semantic versioning. We always know what changes to expect in the next release and can plan accordingly.

What would you like to tell your followers about the latest developments in the Angular ecosystem?

MG: Google is constantly making Angular smaller and faster. Each new release introduces better tooling which drops more unused code from the bundles and makes them smaller. The Angular compiler also brings plenty of optimizations which transform our applications into more performant versions of themselves. At the same time, we see very minor incompatible changes across major releases. The Angular team does an amazing job of improving the framework a lot behind the surface while, for us developers, leaving the APIs unchanged. On the other hand, we have the Angular community, which builds on top of the core. There are plenty of community tools improving our day-to-day development process, as well as plenty of high-quality material which helps us get started with the framework.

Should beginners use Angular?

Would it be advisable for a newbie JavaScript developer to start using Angular, or would you suggest any other framework?

MG: I'd definitely encourage beginner JavaScript developers to give Angular a try. They will build good habits by using TypeScript and have fewer bugs in their code because of the language's type system. Also, Angular CLI is a great starting point for bootstrapping a new project - no matter whether you're a beginner or an expert.

What makes this book a must-have for would-be Angular developers?

MG: The book provides a gentle introduction to all the framework's concepts. In the first chapters, we focus on ideas rather than digging into unfamiliar code. By the time we start writing applications, readers already have a good understanding of the framework's foundation, so they can just start coding without any struggle. As a former member of the Angular mobile team and a long-term contributor, I have revealed implementation details which will help readers get in-depth knowledge about how everything works under the hood. From my teaching experience, I've noticed different patterns which help people develop a deeper understanding of new concepts and use them in practice.

The main challenges of making the switch to Angular

What are the main challenges that anyone would face while shifting to the Angular framework?

MG: There might be some conceptual overhead in the beginning. The framework encourages us to use best practices, which means there may be a lot of new concepts, such as dependency injection, two-way data-binding, observables, and Ahead-of-Time compilation. It takes time and practice to digest everything, but the good thing is that we can do it incrementally. In the book I provide a step-by-step approach - we start developing fully functional applications with the bare minimum and after that increase our expressiveness and architecture by introducing new ideas and concepts.
[box type="info" align="" class="" width=""]Check out this post to learn how to build components using Angular.[/box] Is Angular's dominance fragile? Angular is going through a lot of changes presently, do you feel that these many changes can impact the developer retainability of a particular tool or tech, especially when there are frameworks like React and Vue breathing down the neck of Angular? MG: The incorrect perception about fragility in the Angular’s API is a result of the rewrite of AngularJS. Often people incorrectly refer to AngularJS as Angular version 1 which makes developers think that Angular is the next version of AngularJS. This is incorrect. Angular and AngularJS are completely different frameworks, the only thing in common is that Angular is inspired by AngularJS. This incorrect statement also makes people think that the changes which will happen between Angular version 5 and Angular version 6, for instance, will be as impactful as the changes which happened between Angular version 2 and AngularJS. This is wrong as well. In fact, there were almost no backwards incompatible changes between Angular version 2 and Angular version 5 which means that the API is under strict control and the migration process between major releases is always only related to update of the project’s dependencies’ versions. Regarding the other libraries and frameworks out there such as React and Vue, I see that they are introducing a lot of new ideas as well. There’s competition which is very healthy since it allows the entire ecosystem to constantly evolve. 3 key features of Angular Please share three of the most remarkable features that you feel makes Angular special. Why should people switch to Angular? The dependency injection mechanism of the framework which makes our code testable and enforces better separation of concerns. I’ve had to use other alternative technologies in the past and I can honestly say that this is the feature I missed a lot. The Angular compiler is another remarkable feature. It transforms our application to much more performant versions of itself. Finally, I like the Angular templates. They have plenty of good features and thanks to the great design the Angular compiler is ableto perform a variety of optimizations over our applications. Could another framework replace Angular? Do you in the near future see any framework replacing Angular in popularity and if yes then why? MG: The JavaScript landscape has been very dynamic for the past couple of years. Although Vue is built by independent developers, recently there’s a noticeable focus on frameworks developed and maintained by large companies, such as React and Angular. I think this trend will stay because it’s much safer to bet the future of your product on a framework supported by a big organization. This means it is less likely to have unexpectedly discontinued maintenance and support. With all the qualities that Angular has, I believe that the framework has a very bright future. It has been adopted by a lot of large companies - Microsoft, Capital One, NBA, VMware, Google and many others. [box type="info" align="" class="" width=""]Check out this post on 5 web development tools that will matter in 2018[/box] When Angular made the breaking change to version 2, everyone was divided in their opinion about whether it was the right step, what was your thought? MG: As I mentioned above, Google didn’t introduce breaking changes to version 2 - they created a brand new framework inspired by AngularJS. 
It's incorrect to refer to Angular as Angular 2, Angular 4, or Angular 5, etc. It's just Angular. There's a new major release of the framework every 6 months. At the moment we're at version 5; 5 months from now Angular will reach version 6, and so on. The differences between Angular version 2 and Angular version 5 are insignificant; there were very minimal backwards-incompatible changes. It's pointless to refer to every new version of Angular by placing a number behind it unless we're describing a specific change which happened across releases. In "Switching to Angular" I've focused on the solid foundation of the framework, which is unlikely to change in the near future. At ng-conf 2017 I introduced a tool which can automatically migrate any Angular application between versions 2 and 4, which simplified the transition process even further. On the other hand, AngularJS (which is incorrectly called Angular 1) is still in use, although Google strongly encourages developers to migrate to Angular.

How is this book a stepping stone for a new Angular web developer, and what should they try their hands on next?

MG: "Switching to Angular" provides a great starting point for any intermediate JavaScript developer who wants to learn Angular. In case the reader has prior experience in AngularJS, the transition of their mindset from AngularJS to Angular will be extremely smooth because of the comparison between the two frameworks that is provided. The book also explains in detail what the readers' expectations should be regarding the future of the framework.

If you are interested in getting started with the Angular framework, Switching to Angular - Third Edition is the go-to book to align with.

Bo Weaver on Cloud security, skills gap, and software development in 2019

Guest Contributor
19 Jan 2019
6 min read
Bo Weaver, a Kali Linux expert, shares his thoughts on the security landscape in the cloud. He also talks about the skills gap in the current industry and why hiring is a tedious process. He explains the pitfalls in software development and where tech is heading currently.

Bo, along with another Kali Linux expert, Wolf Halton, was also interviewed on why Kali Linux is the premier platform for testing and maintaining Windows security. They talked about the advantages and disadvantages of using Kali Linux for pentesting, and about their stance on the role of pentesting in cybersecurity, in their interview titled "Security experts, Wolf Halton and Bo Weaver, discuss pentesting and cybersecurity."

First is "The Cloud"

I laugh and cry at this term. I have a sticker on my laptop that says "There is no Cloud... Only other people's computers." Your data is sitting on someone else's system, along with other people's data. These other people also have access to this system. Sure, security controls are in place, but the security of "physical access" has been bypassed.

[box type="shadow" align="" class="" width=""]You're "in the box". One layer of security is now gone.[/box]

Also, your vendor has "FULL ACCESS" to your data in some cases. How can you be sure what is going on with your data when it is in an unknown box in an unknown data center? The first rule of security is "Trust No One". Do you really trust Microsoft, Amazon, or Google? I sure don't!!! Having your data physically out of your company's control is not a good idea. Yes, it is cheaper, but what are your company and its digital property worth?

The 'real' skills and hiring gap in tech

As for the knowledge and skills gap, I see the hiring process in this industry as the biggest gap. The knowledge is out there. We now have schools that teach this field. When I started, there were no school courses; you learned on your own. Since there is training, there are a lot of skilled people out there. But go looking for a job, and it is a nightmare. IT doesn't do the actual hiring these days. Either HR or a headhunting agency does the hiring.

[box type="shadow" align="" class="" width=""]The people you talk to have no clue of what you really do. They have a list of acronyms to look for that they have no clue about to match to your resume. If you don't have the complete list, you're booted.[/box]

If you don't have a certain certification, you're booted, even if you've worked with the technology for 10 years. Once, with my skill level, it took sending out over a thousand resumes and over a year for me to find a job in the security field. The people you talk to have no clue of what you really do. They have a list of acronyms to look for that they have no clue about to match to your resume. If you don't have the complete list, you're booted.

Also, when working in security, you can't really talk about exactly what you did in your last job to HR or a headhunter, due to NDA agreements. Writing these books is the first time I have been able to talk about what I do in detail, since the network being breached is a lab network owned by myself. XYZ Bank would be really mad if I published their vulnerabilities, and rightly so.

In the US, most major networks are not managed by actual engineers but by people with an MBA degree. The manager has no clue about what they are actually managing.
These managers are more worried about their department's P&L statement than the security of the network. In Europe, you find that the IT managers are older engineers that WORKED for years in IT and then moved to management. They fully understand the needs of a network and of security.

The trouble with software development

In software development, I see a dumbing down of user interfaces. This may be good for my 6-year-old grandson, but someone like me may want more access to the system. I see developers change things just for the sake of "change". Take Microsoft's Ribbon in Office: even after all these years, I find the Ribbon confusing and hard to use. At least with LibreOffice, they give you a choice between a ribbon and an old-school menu bar. Or take the changes from Gnome 2 to Gnome 3: this dumbing down and the attempt to make a desktop usable for both a tablet and a mouse totally destroyed the usability of their desktop. What used to take 1 click now takes 4 clicks to do.

[box type="shadow" align="" class="" width=""]A lot of developers these days have an "I am God and you are an idiot" complex. Developers should remember that without an engineer, there would be no system for their code to run on. Listen to and work with engineers.[/box]

Where do I see tech going?

Well, it is in everything these days, and I don't see this changing. I never thought I would see a Linux box running a refrigerator or controlling my car, yet we have them today. Today, we can buy a system the size of a pack of cigarettes for less than $30.00 (the Raspberry Pi) that can do more than a full-size server could 10 years ago. This is amazing. However, it is a two-edged sword when it comes to small, "cheap" devices. When you build a cheap device, you have to keep the costs down. For most companies building these devices, security is either non-existent or an afterthought. Yes, your whole network can be owned by first accessing that $30.00 IP camera with no security and moving on from there to, eventually, your Domain Controller. I know this works; I've done it several times.

If you wish to learn more about tools which can improve your average in password acquisition - from hash cracking, online attacks, offline attacks, and rainbow tables to social engineering - the book Kali Linux 2018: Windows Penetration Testing - Second Edition is the go-to option for you.

Author Bio

Bo Weaver is an old school, ponytailed geek. His first involvement with networks was in 1972 while in the US Navy, working on an R&D project called ARPANET. Bo has been working with and using Linux daily since the 1990s and is a promoter of Open Source. (Yes, Bo runs on Linux.) He now works as the senior penetration tester and security researcher for CompliancePoint, an Atlanta-based security consulting company.

Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)
Cloud computing trends in 2019
Jeff Weiner talks about technology implications on society, unconscious bias, and skill gaps: Wired 25

“Data is the new oil but it has to be refined through a complex processing network” - Tirthajyoti Sarkar and Shubhadeep Roychowdhury [Interview]

Sugandha Lahoti
25 Oct 2018
7 min read
Data is the new oil and is just as crude as unrefined oil. To do anything meaningful with it - modeling, visualization, machine learning, predictive analysis - you first need to wrestle and wrangle with data. We recently interviewed Dr. Tirthajyoti Sarkar and Shubhadeep Roychowdhury, the authors of the course Data Wrangling with Python. They talked about their new course and discussed why you should do data wrangling and why you should use Python to do it.

Key Takeaways

Python boasts a large, broad library equipped with a rich set of modules and functions, which you can use to your advantage to manipulate complex data structures.
NumPy, the Python library for fast numeric array computations, and Pandas, a package with fast, flexible, and expressive data structures, are helpful in working with "relational" or "labeled" data.
Web scraping or data extraction becomes easy and intuitive with Python libraries such as BeautifulSoup4 and html5lib.
Regex, the tiny, highly specialized programming language inside Python, can create patterns that help match, locate, and manage text for large data analysis and searching operations.
Present interesting, interactive visuals of your data with Matplotlib, the most popular graphing and data visualization module for Python.
Easily and quickly separate information from a huge amount of random data using Pandas, the preferred Python tool for data wrangling and modeling.

Full Interview

Congratulations on your new course 'Data Wrangling with Python'. What is this course all about?

Data science is the 'sexiest job of the 21st century' (at least until Skynet takes over the world). But for all the emphasis on 'Data', it is the 'Science' that makes you - the practitioner - valuable. To practice high-quality science with data, first you need to make sure it is properly sourced, cleaned, formatted, and pre-processed. This course teaches you the most essential basics of this invaluable component of the data science pipeline - data wrangling.

What is data wrangling and why should you learn it well?

"Data is the new Oil" and it is ruling the modern way of life through incredibly smart tools and transformative technologies. But oil from the rig is far from being usable. It has to be refined through a complex processing network. Similarly, data needs to be curated, massaged, and refined to become fit for use in intelligent algorithms and consumer products. This is called "wrangling" and (according to CrowdFlower) all good data scientists spend almost 60-80% of their time on this, each day, every project. It generally involves the following (the sketch below gives a flavor of these steps in code):

Scraping the raw data from multiple sources (including web and database tables),
Imputing, formatting, and transforming - basically making it ready for use in the modeling process (e.g. advanced machine learning),
Handling missing data gracefully,
Detecting outliers, and
Being able to perform quick visualizations (plotting) and basic statistical analysis to judge the quality of your formatted data.

This course aims to teach you all the core ideas behind this process and to equip you with the knowledge of the most popular tools and techniques in the domain. As the programming framework, we have chosen Python, the most widely used language for data science. We work through real-life examples, and at the end of this course you will be confident enough to handle a myriad of sources to extract, clean, transform, and format your data for further analysis or exciting machine learning model building.
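As a loose illustration of the wrangling steps listed above, here is a minimal pandas sketch of a single cleaning pass. The file name, column names, and thresholds are hypothetical; it is a sketch of the kind of work the authors describe, not material from the course itself.

```python
# A minimal wrangling pass over a hypothetical "sales.csv":
# load, clean, impute missing values, flag outliers, and sanity-check.
import numpy as np
import pandas as pd

df = pd.read_csv("sales.csv")                      # load the raw data

df.columns = df.columns.str.strip().str.lower()    # normalize messy headers
df["price"] = pd.to_numeric(df["price"], errors="coerce")  # bad cells -> NaN

df["price"] = df["price"].fillna(df["price"].median())     # impute missing data

# Flag outliers more than three standard deviations from the mean
z_scores = (df["price"] - df["price"].mean()) / df["price"].std()
outliers = df[np.abs(z_scores) > 3]

print(df.describe())        # quick statistical check of the formatted data
```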
Walk us through your thinking behind how you went about designing this course. What's the flow like? How do you teach data wrangling in this course?

The lessons start with a refresher on Python, focusing mainly on advanced data structures, and then quickly jump into the NumPy and Pandas libraries as fundamental tools for data wrangling. The course emphasizes why you should stay away from traditional ways of data cleaning, as done in other languages, and take advantage of specialized pre-built routines in Python. Thereafter, it covers how, using the same Python backend, one can extract and transform data from a diverse array of sources - the internet, large database vaults, or Excel financial tables. Further lessons teach how to handle missing or wrong data, and how to reformat it based on the requirements of a downstream analytics tool. The course emphasizes learning by real example and showcases the power of an inquisitive and imaginative mind primed for success.

What other tools are out there? Why do data wrangling with Python?

First, let us be clear that there is no substitute for the data wrangling process itself, and no shortcut either. Data wrangling must be performed before the modeling task, but there is always the debate of doing this process using an enterprise tool or by directly using a programming language and associated frameworks. There are many commercial, enterprise-level tools for data formatting and pre-processing which do not involve coding on the part of the user. Common examples of such tools are:

General-purpose data analysis platforms such as Microsoft Excel (with add-ins)
Statistical discovery packages such as JMP (from SAS)
Modeling platforms such as RapidMiner
Analytics platforms from niche players focusing on data wrangling, such as Trifacta, Paxata, and Alteryx

At the end of the day, it really depends on the organizational approach whether to use any of these off-the-shelf tools or to have more flexibility, control, and power by using a programming language like Python to perform data wrangling. As the volume, velocity, and variety (the three V's of Big Data) of data undergo rapid changes, it is always a good idea to develop and nurture a significant amount of in-house expertise in data wrangling. This is done using fundamental programming frameworks so that an organization is not beholden to the whims and fancies of any particular enterprise platform for as basic a task as data wrangling. Some of the obvious advantages of using an open-source, free programming paradigm like Python for data wrangling are (see the scraping sketch after this list for one of them in action):

A general-purpose open-source paradigm putting no restriction on any of the methods you can develop for the specific problem at hand
A great ecosystem of fast, optimized, open-source libraries focused on data analytics
Growing support for connecting Python to every conceivable data source type
An easy interface to basic statistical testing and quick visualization libraries to check data quality
A seamless interface from the data wrangling output to advanced machine learning models - Python is the most popular language of choice for machine learning/artificial intelligence these days
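To give one concrete flavor of the "connect to any data source" point above, here is a small sketch of scraping tabular text out of a web page with requests and BeautifulSoup4. The URL and page structure are hypothetical.

```python
# A small scraping sketch: pull table rows out of a (hypothetical) web page
# with requests and BeautifulSoup4.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/quarterly-report.html")
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")

# Collect the text of every table row into a list of lists
rows = []
for tr in soup.find_all("tr"):
    cells = [cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])]
    if cells:
        rows.append(cells)

print(rows[:5])    # peek at the first few extracted rows
```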
What are some best practices for performing data wrangling with Python?

Here are five best practices that will help you out in your data wrangling journey with Python. And in the end, all you'll have is clean and ready-to-use data for your business needs.

Learn the data structures in Python really well
Learn and practice file and OS handling in Python
Have a solid understanding of core data types and the capabilities of NumPy and Pandas
Build a good understanding of basic statistical tests and a panache for visualization
Apart from Python, if you want to master one language, go for SQL

What are some misconceptions about data wrangling?

Though data wrangling is an important task, there are certain myths associated with it which developers should be cautious of. Myths such as:

Data wrangling is all about writing SQL queries
Knowledge of statistics is not required for data wrangling
You have to be a machine learning expert to do great data wrangling
Deep knowledge of programming is not required for data wrangling

Learn in detail about these misconceptions. We hope this will help you realize that data wrangling is not as difficult as it seems. Have fun wrangling data!

About the authors

Dr. Tirthajyoti Sarkar works as a Sr. Principal Engineer in the semiconductor technology domain, where he applies cutting-edge data science/machine learning techniques for design automation and predictive analytics. Shubhadeep Roychowdhury works as a Sr. Software Engineer at a Paris-based cybersecurity startup. He holds a Master's degree in Computer Science from West Bengal University of Technology and certifications in Machine Learning from Stanford.

5 best practices to perform data wrangling with Python
4 misconceptions about data wrangling
Data cleaning is the worst part of data analysis, say data scientists

How Qlik Sense is driving self-service Business Intelligence

Amey Varangaonkar
12 Dec 2017
11 min read
Delivering Business Intelligence solutions to over 40,000 customers worldwide, there is no doubt that Qlik has established a strong foothold in the analytics market for many years now. With the self-service capabilities of Qlik Sense, you can take better and more informed decisions than ever before. From simple data exploration to complex dashboarding and cloud-ready, multi-platform analytics, Qlik Sense gives you the power to find crucial, hidden insights from the depths of your data. We got some fascinating insights from our interview with two leading Qlik community members, Ganapati Hegde and Kaushik Solanki, on what Qlik Sense offers its users and what the future looks like for the BI landscape.

[box type="shadow" align="" class="" width=""]
Ganapati Hegde

Ganapati is an engineer by background and carries overall IT experience of over 16 years. He is currently working with Predoole Analytics, an award-winning Qlik partner in India, in a presales role. He has worked on BI projects in several industry verticals and works closely with customers, helping them with their BI strategies. His experience in other aspects of IT - like application design and development, cloud computing, networking, and IT security - helps him design well-rounded BI solutions. He also conducts workshops on various technologies to increase user awareness and drive their adoption.

Kaushik Solanki

Kaushik has been a Qlik MVP (Most Valuable Player) for the years 2016 and 2017 and has been working with the Qlik technology for more than 7 years now. An information technology engineer by profession, he also holds a master's degree in finance. Having started his career as a Qlik developer, Kaushik currently works with Predoole Analytics as the Qlik Project Delivery Manager and is also a certified QlikView administrator. An active member of the Qlik community, his great understanding of project delivery - right from business requirement to final implementation - has helped many businesses take valuable business decisions.
[/box]

In this exciting interview, Ganapati and Kaushik take us through a compelling journey in self-service analytics by talking about the rich features and functionalities offered by Qlik Sense. They also talk about their recently published book 'Implementing Qlik Sense' and what readers can learn from it.

Key Takeaways

With many self-service and guided analytics features, Qlik Sense is perfectly tailored to business users.
Qlik Sense allows you to build customized BI solutions with an easy interface, good mobility, collaboration, a focus on high performance, and very good enterprise governance.
Built-in capabilities for creating its own warehouse, a strong ETL layer, and a visualization layer for creating intuitive Business Intelligence solutions are some of the strengths of Qlik Sense.
With support for open APIs, BI solutions built using Qlik Sense can be customized and integrated with other applications without any hassle.
Qlik Sense is not a rival to open source technologies such as R and Python. Qlik Sense can be integrated with R or Python to perform effective predictive analytics.
'Implementing Qlik Sense' allows you to upgrade your skill set from a Qlik developer to a Qlik consultant. The end goal of the book is to empower readers to implement successful Business Intelligence solutions using Qlik Sense.

Complete Interview

There has been a significant rise in the adoption of self-service Business Intelligence across many industries. What role do you think visualization plays in self-service BI?
In a vast ocean of self-service tools, where do you think Qlik stands out from the others?

As Qlik says, visualization alone is not the answer. A strong backend engine is needed, one capable of robust data integration and association. This is what enables businesses to perform self-service and get answers to all their questions. Self-service plays an important role in the choice of visualization tools, as business users today no longer want to go to IT every time they need changes. Self-service enables business users to quickly build their own visualizations with simple drag and drop.

Qlik stands out from the rest in its capability to bring in multiple data sources, enabling users to easily answer their questions. Its unique associative engine allows users to find hidden insights. The open API allows easy customization and integration, which is a must for enterprises. Qlik's data security and governance are among the best in the industry.

What are the key differences between QlikView and Qlik Sense? What are the factors crucial to building powerful Business Intelligence solutions with Qlik Sense?

QlikView and Qlik Sense are similar yet different. Both share the same engine. On one hand, QlikView is a developer's delight with the options it offers; on the other, Qlik Sense with its self-service is more suited to business users. Qlik Sense has better mobility and a more open API than QlikView, making it more customizable and extensible.

The beauty of Qlik Sense lies in its ability to help businesses get answers to their questions. It correlates data across different sources, making it very meaningful to users. Powerful data visualizations do not necessarily mean beautiful visualizations, and Qlik Sense lays special emphasis on this. Ultimately, what users need is performance, an easy interface, good mobility, collaboration, and good enterprise governance, all of which Qlik Sense provides.

Ganapati, you have over 15 years of experience in IT, and have extensively worked in the BI domain for many years. Please tell us something about your journey. What does your daily schedule look like?

I have been fortunate in my career to work on multiple technologies, ranging from programming, databases, and information security to integrations and cloud solutions. All this knowledge helps me propose the best solutions for my Qlik customers. It's a pleasure helping customers on their analytical journey, and working for a services company lets me meet customers from multiple domains. My daily schedule involves running proofs of concept and demos for customers, designing optimal solutions on Qlik, and conducting requirement-gathering workshops. Facing new challenges every day is a pleasure, and it helps me grow my knowledge base. Qlik's open API opens up amazing new possibilities and lets me come up with out-of-the-box solutions.

Kaushik, you have been awarded the Qlik MVP for 2016 and 2017, and have over 7 years of experience using Qlik's tools. Please tell us something about your journey in this field. How do you use the tool in your day-to-day work?

I started my career working with Qlik technology. My hunger for learning Qlik made me addicted to the Qlik community. I learned a great deal from the community by asking questions and solving other members' real-world problems. This helped me earn the Qlik MVP award two years in a row.
The MVP award motivated me to keep helping Qlik customers and users, and that is one of the reasons I decided to write a book on Qlik Sense. I have implemented Qlik not only for clients but also for my personal use cases. Qlik helps me in my day-to-day work in many ways and makes my life much easier. It's safe to say that I absolutely love Qlik.

Your book 'Implementing Qlik Sense' is primarily divided into 4 sections, with each section catering to a specific need when it comes to building a solid BI solution. Could you please talk more about how you have structured the book, and why?

BI projects are challenging, and it really hurts when a project doesn't succeed. The purpose of the book is to enable Qlik Sense developers to implement successful Qlik projects. There is often so much focus on development that Qlik developers miss several other crucial factors that contribute to project success. To support the journey from Qlik developer to Qlik consultant, the book is divided into 4 sections. The first section focuses on the initial preparation and is intended to help consultants get their groundwork done. The second section focuses on the execution of the project and is intended to help consultants play a key role in the remaining phases: requirement gathering, architecture, design, development, and UAT. The third section familiarizes consultants with some industry domains, helping them engage better with business users and suggest value additions to the project. The last section applies the knowledge gained in the first three sections to a case study of the kind of project we come across routinely.

Who is the primary target audience for this book? Are there any prerequisites they need to know before they start reading this book?

The primary target audience is Qlik developers who are looking to progress in their careers and wear the hat of a Qlik consultant. The book is also for existing consultants who would like to sharpen their skills and use Qlik Sense more efficiently. It will help them become trusted advisors to their clients. Those who are already familiar with some Qlik development will be able to get the most out of this book.

Qlik Sense is primarily an enterprise tool. With the rise of open source languages such as R and Python, why do you think people would still prefer enterprise tools for their data visualization?

Qlik Sense is not in competition with R and Python; there are lots of synergies. The customer gets the best value when Qlik co-exists with R/Python and can leverage the capabilities of both. Qlik Sense does not have predictive capabilities of its own, a gap easily filled by R/Python. For the customer, the tight integration ensures they never have to leave the Qlik screen. There are other use cases for using them jointly as well, such as analyzing unstructured data and applying machine learning.

The reports and visualizations built using Qlik Sense can be viewed and ported across multiple platforms. Can you please share your views on this? How does it help users?

Qlik has opened all gates to integrating its reporting and visualization with most technologies through APIs. This has empowered customers to integrate Qlik with their existing portals and provide easy access to end users. Qlik provides APIs for almost all its products, which makes Qlik the first choice for many CIOs, because with those APIs they get a variety of options to integrate and automate their work.
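To make the open API point concrete, here is a minimal sketch of talking to the Qlik Engine JSON API (the JSON-RPC-over-WebSocket interface behind Qlik Sense) from Python. The port, app name, and lack of authentication are assumptions for a local Qlik Sense Desktop setup, not details from the interview; production integrations would typically authenticate and use a helper library such as Qlik's open source enigma.js.

```python
# Minimal sketch: opening an app through the Qlik Engine JSON API.
# Assumes a local Qlik Sense Desktop engine on its default port (4848)
# and a hypothetical app name; both are illustrative assumptions.
import json

from websocket import create_connection  # pip install websocket-client

ws = create_connection("ws://localhost:4848/app/")

# Handle -1 addresses the engine's Global object; OpenDoc returns a
# handle to the app that subsequent JSON-RPC calls can target.
ws.send(json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "handle": -1,
    "method": "OpenDoc",
    "params": {"qDocName": "Consumer Sales.qvf"},  # hypothetical app
}))

# The engine may push notifications (e.g. OnConnected) before the reply,
# so read frames until the response matching our request id arrives.
while True:
    message = json.loads(ws.recv())
    if message.get("id") == 1:
        print(message)  # on success, qReturn carries the app's handle
        break

ws.close()
```

This request/response pattern is the same one Qlik's own clients use, which is why the kind of portal embedding and automation described here is possible over the open APIs.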
What are the other key functionalities of Qlik Sense that help users build better BI solutions?

Qlik Sense is not just a pure-play data visualization tool. It can create its own warehouse, provides an ETL layer, and then of course there's the visualization layer. For customers, it's all about getting all the relevant components required for their BI project in a single solution. Qlik is investing heavily in R&D, and with its recent acquisitions and a strong portfolio, it is a complete solution that fulfills all their use cases. The open API has opened newer avenues: custom visualizations, amazing concepts such as chatbots and augmented intelligence, and much more. Core strengths in data association, enterprise scalability, and governance, combined with all these other aspects, make Qlik one of the best in overall customer satisfaction.

Do you foresee Qlik Sense competing strongly with major players such as Tableau and Power BI in the near future? Also, how do you think Qlik plans to tackle the rising popularity of the open source alternatives?

Qlik has been classified as a Leader in Gartner's Magic Quadrant for several years now. We often come across Tableau and Microsoft Power BI as competition. We suggest our customers do a thorough evaluation, and more often than not they choose Qlik for its features and the simplicity it offers. With recent acquisitions, Qlik Sense has become an end-to-end BI solution, covering use cases ranging from report distribution to data-as-a-service and geoanalytics. Open source alternatives have their own market, and it makes more sense to leverage their capabilities than to compete with them. An example, of course, is the strong integration of many BI tools with R or Python, which makes life so much easier when it comes to finding useful insights in data.

Lastly, what are the 3 key takeaways from your book 'Implementing Qlik Sense'? How will this book help readers?

The book is all about meeting your client's expectations. The key takeaways are:

- Understand the role and importance of a Qlik consultant and why it's crucial to be a trusted advisor to your clients.
- Successfully navigate all the aspects that enable a successful implementation of your Qlik BI project.
- Focus on mitigating risks, driving adoption, and avoiding common mistakes while using Qlik Sense.

The book is ideal for Qlik developers who aspire to become Qlik consultants. It uses simple language and examples to make the learning journey as smooth as possible, and it helps consultants give equal importance to phases of project development that are often neglected. Ultimately, the book will enable Qlik consultants to deliver quality Qlik projects.

If this interview has nudged you to explore Qlik Sense, make sure you check out our book Implementing Qlik Sense right away!

“Tableau is the most powerful and secure end-to-end analytics platform”: An interview with Joshua Milligan

Sunith Shetty
22 May 2018
9 min read
Tableau is one of the leading BI tools used by data science and business intelligence professionals today. You can use it not only to create powerful data visualizations but also to extract actionable insights for quality decision making, thanks to the plethora of tools and features it offers.

We recently interviewed Joshua Milligan, a Tableau Zen Master and the author of the book Learning Tableau. Joshua takes us on an insightful journey into Tableau, explaining why it is the Google of data visualization. He tells us all about its current and future focus areas, such as geospatial analysis and workflow automation, and exciting new features and tools such as Hyper and Tableau Prep, among other topics. He also gives us a preview of things to come in his upcoming book.

Author's Bio

Joshua Milligan, author of the bestselling book Learning Tableau, has been with Teknion Data Solutions since 2004 and currently serves as a principal consultant. With a strong background in software development and custom .NET solutions, he brings a blend of analytical and creative thinking to BI solutions. Joshua has been named Tableau Zen Master, the highest recognition of excellence from Tableau Software, not once but three times. In 2017, he competed as one of three finalists in the prestigious Tableau Iron Viz competition. As a Tableau trainer, mentor, and leader in the online Tableau community, he is passionate about helping others gain insights from their data. His work has been featured multiple times on Tableau Public's Viz of the Day and Tableau's website. He also shares frequent Tableau (and Maestro) tips, tricks, and advice on his blog VizPainter.com.

Key Takeaways

- Tableau is perfectly tailored for business intelligence professionals, given its extensive list of offerings from data exploration to powerful data storytelling. The drag-and-drop interface lets you understand data visually, enabling anyone to perform and share self-service data analytics with colleagues in seconds.
- Hyper is a new in-memory data engine designed for fast analytical query processing on complex datasets.
- Tableau Prep, a new data preparation tool released with Tableau 2018.1, allows users to easily combine, shape, analyze, and clean their data for compelling analytics.
- Tableau 2018.1 is expected to bring new geospatial tools, enterprise enhancements to Tableau Server, and new extensions and plugins for creating interactive dashboards.
- Tableau users can expect to see artificial intelligence and machine learning become major features in both Tableau and Tableau Prep, deriving insights from users' behavior across the enterprise.

Full Interview

There is a lot of enterprise software for business intelligence. How does Tableau compare against the others? What are the main reasons for Tableau's popularity?

Tableau's paradigm is what sets it apart from the others. It's not just about creating a chart or dashboard. It's about truly having a conversation with the data: asking questions and seeing instant results as you drag and drop to get new answers that raise deeper questions, and then iterating. Tableau allows for a flow of thought through the entire cycle of analytics, from data exploration through analysis to data storytelling. Once you understand this paradigm, you will flow with Tableau and do amazing things!

There's a buzz in the developer community that Tableau is the Google of data visualization. Can you list the top 3-5 features in Tableau 10.5 that are most appreciated by the community?
How do you use Tableau in your day-to-day work?

Tableau 10.5 introduced Hyper, a next-generation data engine that lays a foundation for enterprise scaling, along with a host of exciting new features, and Tableau 2018.1 builds on this foundation. One of the most exciting new features is a completely new data preparation tool: Tableau Prep. Tableau Prep complements Tableau Desktop and allows users to very easily clean, shape, and integrate their data from multiple sources. It's intuitive and gives you a hands-on, instant-feedback paradigm for data preparation, in a similar way to what Tableau Desktop enables for data visualization.

Tableau 2018.1 also includes new geospatial features that make all kinds of analytics possible. I'm particularly excited about support for the geospatial data types and functions in SQL Server, which have allowed me to dynamically draw distances and curves on maps. Additionally, web authoring in Tableau Server is now at parity with Tableau Desktop.

I use Tableau every day to help my clients see and understand their data and to make key decisions that drive new business, avoid risk, and find hidden opportunities. Tableau Prep makes it easier to access the data I need and shape it according to the analysis I'll be doing.
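As a side note on the SQL Server geospatial support Joshua mentions, the distance calculations happen in the database's geography type, and Tableau can then consume the results over a live connection. The sketch below is illustrative only: the table, columns, and connection string are hypothetical, and it is shown in Python via pyodbc simply to keep it runnable outside Tableau.

```python
# Hedged sketch: the kind of SQL Server geography query Tableau can consume.
# Table, columns, and connection details are hypothetical assumptions.
import pyodbc  # pip install pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=Demo;Trusted_Connection=yes;"
)

query = """
    SELECT s.store_name,
           -- geography::Point(lat, lon, 4326) builds a point; STDistance
           -- returns the separation between two geography instances in meters.
           s.location.STDistance(geography::Point(29.7604, -95.3698, 4326))
               AS meters_from_reference
    FROM dbo.stores AS s
    ORDER BY meters_from_reference;
"""

for store_name, meters in conn.execute(query):
    print(f"{store_name}: {meters / 1000.0:.1f} km")

conn.close()
```

Pushing the computation into the database this way means the distances arrive pre-computed, which is what makes dynamically drawn distances and curves practical on a live map.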
Tableau offers a wide range of products to suit its users' needs. How does one choose the right product for their data analytics or visualization needs? For example, what are the key differences between Tableau Desktop, Server, and Public? Are there any plans for a unified product for the Tableau newbie in the near future?

As a consultant at Teknion Data Solutions (a Tableau Gold Partner), I work with clients all the time to help them make the best decisions about which Tableau offering best meets their needs. Tableau Desktop is the go-to authoring tool for designing visualizations and dashboards. Tableau Server, which can be hosted on premises or in the cloud, gives enterprises and organizations the ability to share and scale Tableau; it is now at near parity with Tableau Desktop in terms of authoring. Tableau Online is the cloud-based, Tableau-managed solution. Tableau Public allows for sharing public visualizations and dashboards with a worldwide audience.

How good is Tableau for self-service analytics and automating workflows? What are the key challenges and limitations?

Tableau is amazing for this. Combined with the new data prep tool, Tableau Prep, Tableau really does offer users across the spectrum, from business users to data scientists, the ability to quickly and easily perform self-service analytics. As with any tool, there are definitely cases that require some expertise to reach a solution. Pulling data from an API or web-based source, or sometimes structuring the data in just the right way for the desired analysis, are examples that might require some know-how. But even there, Tableau has the tools that make it possible (for example, the web data connector) and partners (like Teknion Data Solutions) to help put it all together.

In the third edition of Learning Tableau, I expand the scope of the book to show the full cycle of analytics, from data prep and exploration to analysis and data storytelling. Expect updates on new features and concepts (such as the changes Hyper brings), a new chapter focused on Tableau Prep and strategies for shaping data to perform analytics, and new examples throughout that span multiple industries and common analytics questions.

What is the development roadmap for Tableau 2018.1? Are we expecting major feature releases this year to overcome some of the common pain areas in business intelligence?

I'm particularly excited about Tableau 2018.1. Tableau hasn't revealed everything yet, but things such as new geospatial tools and features, enterprise enhancements to Tableau Server, the new extensions API, new dashboard tools, and even a new visualization type or two look to be amazing!

Tableau is doing a lot of work in the geospatial domain, coming up with new plugins, connectors, and features. Can we expect Tableau to further strengthen its support for spatial data? What other areas or domains is Tableau currently focused on?

I couldn't say what the top 3-5 areas are, but you are absolutely correct that Tableau is really putting some emphasis on geospatial analytics. I think the speed and power of the Hyper data engine make a lot of things like this possible. Although I don't have any specific knowledge beyond what Tableau has publicly shared, I wouldn't be surprised to see some new predictive and statistical models and an expansion of data preparation abilities.

What's driving Tableau to the cloud? Can we expect more organizations to adopt Tableau in the cloud?

There has been a major shift to the cloud by organizations. The ability to manage, scale, ensure uptime, and save costs is driving this move, and that in turn makes Tableau's cloud-based offerings very attractive.

What does Tableau's future hold, in your view? For example, do you see machine learning and an AI-powered analytics platform transformation? Or can we expect Tableau to enter the IoT and IIoT domains?

Tableau demonstrated a concept around NLQ at the Tableau Conference and has already started building in a few machine learning features. For example, Tableau now recommends joins based on what it learns from the behavior of users across the enterprise. Tableau Prep has been designed from the ground up with machine learning in mind. I fully expect to see AI and machine learning become major features in both Tableau and Tableau Prep, but, true to Tableau's paradigm, they will complement the work of the analyst and allow for deeper insight without obscuring the role that humans play in reaching that insight. I'm excited to see what is announced next!

Give us a sneak peek into the book you are currently writing, "Learning Tableau 2018.1, Third Edition", expected to be released in the third quarter of this year. What should our readers get most excited about as they wait for this book?

Although the foundational concepts behind learning Tableau remain the same, I'm excited about the new features that have been released, or will be as I write. Among these are a couple of game-changers, such as the new geospatial features and the new data prep tool, Tableau Prep. In addition to updating the existing material, I'll definitely have a new chapter or two covering those topics!

If you found this interview interesting, make sure you check out these other insightful articles on business intelligence:

- Top 5 free Business Intelligence tools [Opinion]
- Tableau 2018.1 brings new features to help organizations easily scale analytics [News]
- Ride the third wave of BI with Microsoft Power BI [Interview - Part 1]
- Unlocking the secrets of Microsoft Power BI [Interview - Part 2]
- How Qlik Sense is driving self-service Business Intelligence [Interview]

Listen: UX designer Will Grant explains why good design probably can't save the world [Podcast]

Richard Gall
18 Mar 2019
2 min read
UX designer has become a popular job title with tech recruiters anxious to give roles a little extra sparkle and some additional sex appeal. But has UX become inflated as a term? Is its value being diluted? Although paying close attention to the experience of users can only be a good thing, are we doing a disservice to the discipline by treating it as a buzzword or a fad? If we pretend something's sexy, how serious can we really be about it?

Whatever the problems with the uses and abuses of UX today, a landscape characterized by dark patterns and digital detox is certainly not a comfortable one for users. That means UX design is arguably more important than ever.

What UX design is... and what it isn't

To get to the heart of what UX design is, as well as what it isn't, we spoke to Will Grant (@wgx), a UX designer with experience working with a range of clients on products that have found their way into the lives of millions of users around the world. Will is the author of 101 UX Principles, a definitive design guide that explores key issues in the field.

In the podcast episode, we discussed:

- What UX is and isn't
- The UX process: what UX designers actually do
- The key skills a UX designer needs
- Originality v. templating
- Whether developers need to write code
- What conversational UI means for UX
- Can good design really save the world? Or should we quit the bullshit?

Listen here: https://soundcloud.com/packt-podcasts/can-good-design-really-save-the-world-will-grant-on-the-importance-of-ux-in-2019

Read next: Will Grant's 10 commandments for effective UX Design


“Instead of data scientists working on their models and advancing AI, they are spending their time doing DeepOps work”, MissingLink CEO, Yosi Taguri [Interview]

Amey Varangaonkar
08 Nov 2018
10 min read
Machine learning has shown immense promise across domains and industries in recent years. From helping diagnose serious ailments to powering autonomous vehicles, it is finding useful applications across a spectrum of industries. However, the actual process of delivering business outcomes using machine learning currently takes too long and is too expensive, forcing some businesses to look for less burdensome alternatives. MissingLink.ai is a recently launched platform built to fix just this problem. It enables data scientists to spend less time on grunt work by automating and streamlining the entire machine learning cycle, giving them more time to apply the actionable insights gleaned from the data.

Key Takeaways

- Processing and managing the sheer volume of data is one of the key challenges that today's AI tools face.
- Yosi thinks the idea of companies creating their own machine learning infrastructure doesn't make a lot of sense. Data professionals should focus on more important problems within their organizations and let the platform take care of the grunt work.
- MissingLink.ai is an innovative AI platform born out of the need to simplify AI development, taking the common, menial data processing tasks away from data scientists and allowing them to focus on bigger data-related issues, through experiment management, data management, and resource management.
- MissingLink is part of the Samsung NEXT product development team, which aims to help businesses automate and accelerate their projects using machine learning.

We had the privilege of interviewing Yosi Taguri, the founder and CEO of MissingLink, to learn more about the platform and how it enables more effective deep learning.

What are the biggest challenges that companies come across when trying to implement a robust machine learning/deep learning pipeline in their organization? How does it affect their business?

The biggest challenge, simply put, is that today's AI tools can't keep up with the amount of data being produced. And it's only going to get more challenging from here! As datasets continue to grow, they will require more and more compute power, which means we risk falling farther behind unless we change the tools we're using. While everyone is talking about the promise of machine learning, the truth is that today, assessing data is still too time-consuming and too expensive. Engineers are spending all their time managing the sheer volume of data, rather than actually learning from it and being empowered to make changes.

Let's talk about MissingLink.ai, the platform you and your team have launched for accelerating deep learning across businesses. Why the name MissingLink? What was the motivation to launch this platform?

The name is actually a funny story, and it ties pretty neatly into why we created the platform. When we were starting out three years ago, deep learning was still a relatively new concept, and my team and I were working hard to master its intricacies. As engineers, we primarily worked with code, so being able to solve problems with data was a fascinating new challenge for us. We quickly realized that deep learning is really hard and moves very, very slowly. So we set out to solve that problem of how to build really smart machines really fast, and we thought about it through the lens of software development.
Our goal was to accelerate machine learning development from its glacial pace, because we felt that there was something missing: a missing link, if you will.

MissingLink is a part of the growing Samsung NEXT product development team. How does it feel? What role do you think MissingLink will play in Samsung NEXT's plans and vision going forward?

Samsung NEXT's broader mission is to help startups reach their full potential and achieve their goals. More specifically, Samsung NEXT discovers and backs the engineers, innovators, builders, and entrepreneurs who will help Samsung define the future of software and services. The Samsung NEXT product development team is focused on building software and services that take advantage of, and accelerate, opportunities related to some of the biggest shifts in technology, including automation, supply and demand, and interfaces. This will require hardware and software to seamlessly come together. Over the past few years, nearly all startups have been leveraging AI for some component of their business, yet practical progress has been slower than promised. MissingLink is a foundational tool that enables the progress of these big changes, helping entrepreneurs with great use cases for machine learning to accelerate their projects.

Could you give us the key features of MissingLink.ai that make it stand out from the other AI platforms available out there? How will it help data scientists and ML engineers build robust, efficient machine learning models?

First off, MissingLink.ai is the most comprehensive AI platform out there. It handles the entire deep learning lifecycle and all its elements, including code, data, experiments, and resources. I'd say that our top features include:

- Experiment management: see and compare the entire history of experiments. MissingLink.ai auto-documents every aspect.
- Data management: a unique data store tracks the data versions used in every experiment, streams data, caches it locally, and syncs only the changes.
- Resource management: manages your resources with no extra infrastructure costs, using your AWS or other cloud credentials. It grows and shrinks your cloud resources as needed.

These features, together with our intuitive interface, really put data scientists and engineers in the driver's seat when creating AI models. Now they can have more control and spend less energy repeating experiments, giving them more time to focus on what is important.

Your press release on the launch of MissingLink states that "the actual process of delivering business outcomes currently takes too long and it is too expensive. MissingLink.ai was born out of a desire to fix that." Could you please elaborate on how MissingLink makes deep learning less expensive and more accessible?

Companies are currently spending too much time and devoting too many resources to the menial tasks that are necessary for building machine learning models. The more time data scientists spend on tasks like spinning up machines, copying files, and doing DevOps, the more money a company is wasting. MissingLink changes that by introducing something we're calling DeepOps, or deep learning operations, which allows data scientists to focus on data science and lets the machine take care of the rest. It's like DevOps, where the role is about making the process of software development more efficient and productionized, but the difference is that no one has been filling this role, and it's different enough that it's very specific to the task of deep learning. Today, instead of data scientists working on their models and advancing AI, they are spending their time doing this DeepOps work. MissingLink reduces load time and facilitates easy data exploration by eliminating the need to copy files, through data management in a version-aware data store.
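To give a feel for what "auto-documenting every aspect" of an experiment involves, here is a deliberately generic Python sketch. It is not MissingLink's SDK (the interview doesn't show their API); it is simply the kind of bookkeeping that a platform like MissingLink is said to automate, alongside data versioning and resource management.

```python
# Generic illustration of experiment auto-documentation: a hypothetical
# stand-in, not MissingLink's actual SDK.
import json
import time
import uuid
from contextlib import contextmanager

@contextmanager
def experiment(params):
    """Record hyperparameters, metrics, and timing for one training run."""
    record = {"id": uuid.uuid4().hex, "params": params, "metrics": []}
    start = time.time()
    try:
        # Hand the caller a callback for logging metrics during the run.
        yield record["metrics"].append
    finally:
        record["seconds"] = round(time.time() - start, 2)
        with open(f"experiment_{record['id']}.json", "w") as f:
            json.dump(record, f, indent=2)  # one comparable record per run

# Usage: every run leaves behind a self-describing, comparable record.
with experiment({"lr": 0.01, "batch_size": 64}) as log_metric:
    for epoch in range(3):
        # Stand-in for a real training loop and its measured loss.
        log_metric({"epoch": epoch, "loss": round(1.0 / (epoch + 1), 3)})
```

Multiply this bookkeeping by data streaming, caching, and cluster provisioning, and the appeal of pushing the whole burden onto a platform becomes clear.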
Most businesses are moving their operations onto the cloud these days, with AWS, Azure, GCP, etc. being their preferred cloud solutions. These platforms have sophisticated AI offerings of their own. Do you see AI platforms such as MissingLink.ai as competition to these vendors, or can the two work collaboratively?

I wouldn't call cloud companies our competitors; we don't provide the cloud services they do, and they don't provide the DeepOps service that we do. Yes, we are all trying to simplify AI, but we're going about it in very different ways. We can actually use a customer's public cloud provider as the infrastructure to run the MissingLink.ai platform. If customers provide us with their cloud credentials, we can even manage this for them directly.

Concepts such as reinforcement learning and deep learning for mobile are getting a lot of traction these days, and have moved out of the research phase into the application/implementation phase. Soon, they might start finding extensive business applications as well. Are there plans to incorporate these tools and techniques into the platform in the near future?

We support all forms of deep learning, including reinforcement learning. On the deep learning for mobile side, we think the edge is going to be a big thing as more and more developers around the world are exposed to deep learning. We plan to support it early next year.

Currently, data privacy and AI ethics have become a focal point of every company's AI strategy. We see tech conglomerates increasingly coming under the scanner for ignoring these key aspects in their products and services. This is giving rise to an alternate movement in AI, with privacy- and ethics-centric projects like Deon, Vivaldi, and Tim Berners-Lee's Solid. How does MissingLink approach the topics of privacy, user consent, and AI ethics? Are there processes, tools, or principles in place in the MissingLink ecosystem or development teams that balance these concerns?

When we started MissingLink, we understood that data is the most sensitive part of deep learning. It is the new IP. Companies spend 80% of their time attending to data, refining it, tagging it, and storing it, and are therefore reluctant to upload it to a third-party vendor. We have built MissingLink with that in mind. Our solution allows customers to simply point us to where their data is stored internally; without moving it, or having to access it as a SaaS solution, we are able to help them manage it by enabling version management, as they do with code. We can then stream it directly to the machines that need the data for processing, and document which data was used, for reproducibility.

Finally, where do you see machine learning and deep learning heading in the near future? Do you foresee a change in the way data professionals work today? How will platforms like MissingLink.ai change the current way of working?

Right now, companies are creating their own machine learning infrastructure, and that doesn't make sense. Data professionals can and should be focusing on more important problems within their organizations.
Platforms like MissingLink.ai free data scientists from the grunt work it takes to maintain the infrastructure, so they can focus on bigger-picture issues. This is the work that is not only more rewarding for engineers, but also creates the unique value that companies need to compete. We're excited to help empower more data professionals to focus on the work that actually matters.

It was wonderful talking to you, and this was a very insightful discussion. Thanks a lot for your time, and all the best with MissingLink.ai!

Read more:

- Michelangelo PyML: Introducing Uber's platform for rapid machine learning development
- Tesseract version 4.0 releases with new LSTM based engine, and an updated build system
- Baidu releases a new AI translation system, STACL, that can do simultaneous interpretation