
Author Posts

121 Articles

Bo Weaver on Cloud security, skills gap, and software development in 2019

Guest Contributor
19 Jan 2019
6 min read
Bo Weaver, a Kali Linux expert, shares his thoughts on the security landscape in the cloud, on the skills gap in the industry and why hiring is such a tedious process, and on the pitfalls in software development and where tech is heading. Bo, along with fellow Kali Linux expert Wolf Halton, was also interviewed on why Kali Linux is the premier platform for testing and maintaining Windows security, on the advantages and disadvantages of using Kali Linux for pentesting, and on the role of pentesting in cybersecurity in general, in the interview titled "Security experts, Wolf Halton and Bo Weaver, discuss pentesting and cybersecurity".

First is "The Cloud". I laugh and cry at this term. I have a sticker on my laptop that says "There is no Cloud... only other people's computers." Your data is sitting on someone else's system along with other people's data, and those other people also have access to that system. Sure, security controls are in place, but the security of physical access has been bypassed. You're "in the box"; one layer of security is now gone. Also, in some cases your vendor has FULL ACCESS to your data. How can you be sure what is going on with your data when it is in an unknown box in an unknown data center? The first rule of security is "Trust no one." Do you really trust Microsoft, Amazon, or Google? I sure don't! Having your data physically out of your company's control is not a good idea. Yes, it is cheaper, but what are your company and its digital property worth?

The 'real' skills and hiring gap in tech

For the knowledge and skills gap, I see the hiring process in this industry as the biggest gap. The knowledge is out there; we now have schools that teach this field. When I started, there were no school courses; you learned on your own. Since there is training, there are a lot of skilled people out there. But go looking for a job, and it is a nightmare. IT doesn't do the actual hiring these days; either HR or a headhunting agency does. The people you talk to have no clue what you really do. They have a list of acronyms, which they themselves don't understand, to match against your resume, and if you don't have the complete list, you're booted. If you don't have a certain certification, you're booted even if you've worked with the technology for 10 years. Once, even with my skill level, it took sending out over a thousand resumes and over a year for me to find a job in the security field.

Also, when working in security, you can't really tell HR or a headhunter what exactly you did in your last job, due to NDA agreements. Writing these books is the first time I have been able to talk about what I do in detail, since the network being breached is a lab network I own myself. XYZ Bank would be really mad if I published their vulnerabilities, and rightly so.

In the US, most major networks are not managed by actual engineers but by people with an MBA degree. The manager has no clue what they are actually managing, and these managers are more worried about their department's P&L statement than about the security of the network. In Europe, you find that IT managers are older engineers who worked for years in IT and then moved into management. They fully understand the needs of a network and its security.

The trouble with software development

In software development, I see a dumbing down of user interfaces. This may be good for my 6-year-old grandson, but someone like me may want more access to the system. I see developers change things just for the sake of change. Take Microsoft's Ribbon in Office: even after all these years, I find the ribbon confusing and hard to use. At least with LibreOffice, they give you a choice between a ribbon and an old-school menu bar. Or take the changes from GNOME 2 to GNOME 3: this dumbing down, and the attempt to make one desktop usable with both a tablet and a mouse, totally destroyed the usability of the desktop. What used to take 1 click now takes 4.

A lot of developers these days have an "I am God and you are an idiot" complex. Developers should remember that without an engineer, there would be no system for their code to run on. Listen to and work with engineers.

Where do I see tech going?

Well, it is in everything these days, and I don't see this changing. I never thought I would see a Linux box running a refrigerator or controlling my car, yet we have them today. Today, we can buy a system the size of a pack of cigarettes for less than $30.00 (a Raspberry Pi) that can do more than a full-size server could do 10 years ago. This is amazing. However, it is a two-edged sword when it comes to small, "cheap" devices. When you build a cheap device, you have to keep the costs down, and for most companies building these devices, security is either non-existent or an afterthought. Yes, your whole network can be owned by first accessing that $30.00 IP camera with no security and moving on from there to, eventually, your Domain Controller. I know this works; I've done it several times.

If you wish to learn more about tools that can improve your average in password acquisition, from hash cracking, online attacks, offline attacks, and rainbow tables to social engineering, the book Kali Linux 2018: Windows Penetration Testing - Second Edition is the go-to option for you.

Author Bio

Bo Weaver is an old-school, ponytailed geek. His first involvement with networks was in 1972, while in the US Navy, working on an R&D project called ARPANET. Bo has been working with and using Linux daily since the 1990s and is a promoter of open source. (Yes, Bo runs on Linux.) He now works as the senior penetration tester and security researcher for CompliancePoint, an Atlanta-based security consulting company.

Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)
Cloud computing trends in 2019
Jeff Weiner talks about technology implications on society, unconscious bias, and skill gaps: Wired 25

Security experts, Wolf Halton and Bo Weaver, discuss pentesting and cybersecurity [Interview]

Guest Contributor
18 Jan 2019
4 min read
This is Part 2 of the interview with our two Kali Linux experts, Wolf Halton and Bo Weaver, on using Kali Linux for pentesting. In this part, we talk about the role of pentesting in cybersecurity. Previously, the authors talked about why Kali Linux is the premier platform for testing and maintaining Windows security, about the advantages and disadvantages of using Kali Linux for pentesting, and about their love for the Kali platform. Wolf says, "Kali is a stable platform, based upon a major distribution with which I am very familiar. There are over 400 security tools in the Kali repos, and it can also draw directly from the Debian Testing repos for even more tools." Here are a few more questions we asked them about pentesting in cybersecurity in general.

Can you tell us about the role of pentesting in cybersecurity? According to you, how has pentesting improved over the years?

Bo Weaver: For one thing, pentesting has become an accepted and required practice in network security. I remember the day when the attitude was "It can't happen here, so why should you break into my network? Nobody else is going to." Network security in general wasn't even thought about by most companies, and spending money on network security was seen as a waste. The availability of tools has grown in leaps and bounds, as has the availability of documentation on vulnerabilities and exploits, and the industry's awareness of the importance of network security has grown with them.

Wolf Halton: The tools have gotten much more powerful and easier to use. A pentester will still be more effective if they can craft their own exploits, but they can now craft them in an environment of shared libraries such as Metasploit, and there are stable pentesting platforms like Kali Linux Rolling (2018) that reduce the learning curve to becoming an effective pentester. Pentesting is rising as a profession along with many other computer-security roles. There are compliance requirements to run penetration tests at least annually, or whenever a network is changed appreciably.

What aspects of pentesting do you feel are tricky to get past? What are the main challenges that anyone would face?

Bo Weaver: Staying out of jail. Laws can be tricky. You need to know and fully understand all laws pertaining to network intrusion, both for the state you are working in and at the federal level. In pentesting, you are walking right up to the line between right and wrong and hanging your toes over that line a little bit. You can hang your toes over the line, but DON'T CROSS IT! Not only will you go to jail, but you will never work in the security field again, unless it is in some dark corner of the NSA. Never work without a WRITTEN waiver that fully contains the "Rules of Engagement" and is signed by the owner or a C-level person of the company being tested. Don't decide to test your bank's website, even if your intent is good. If you do find a flaw and report it, you will not get a pat on the back; you will most likely be charged with hacking. Banks especially get really upset when people poke at their networks. Yes, some companies offer bug bounty programs. These companies post their Rules of Engagement on their site, along with a waiver to take part in the program. Print this and follow the rules laid out.

Wolf Halton: Staying on the right side of the law. Know the laws that govern your profession, and always know your customer. Have a hard copy of an agreement that gives you permission to test a network. Attacking a network without written permission is a felony and might reduce your available career paths.

Author Bio

Wolf Halton is an authority on computer and internet security, a best-selling author on computer security, and the CEO of Atlanta Cloud Technology. He specializes in business continuity, security engineering, open source consulting, marketing automation, virtualization and data center restructuring, network architecture, and Linux administration.

Bo Weaver is an old-school, ponytailed geek. His first involvement with networks was in 1972, while in the US Navy, working on an R&D project called ARPANET. Bo has been working with and using Linux daily since the 1990s and is a promoter of open source. (Yes, Bo runs on Linux.) He now works as the senior penetration tester and security researcher for CompliancePoint, an Atlanta-based security consulting company.

Pentest tool in focus: Metasploit
Kali Linux 2018.2 released
How artificial intelligence can improve pentesting

Kali Linux 2018 for testing and maintaining Windows security - Wolf Halton and Bo Weaver [Interview]

Guest Contributor
17 Jan 2019
9 min read
Microsoft Windows is one of the two most common OSes, and managing its security has spawned the discipline of Windows security. Kali Linux is the premier platform for testing and maintaining Windows security. Kali is built on the Debian distribution of Linux and shares the legendary stability of that OS, which lets you focus on network penetration, password cracking, and forensics tools, and not on the OS itself. In this interview, we talk to two experts, Wolf Halton and Bo Weaver, about using Kali Linux for pentesting, and we discuss their book Kali Linux 2018: Windows Penetration Testing - Second Edition.

Read also: Kali Linux 2018 for testing and maintaining Windows security - Interview with Wolf Halton and Bo Weaver - Part 2

Kali Linux is the premier platform for testing and maintaining Windows security. According to you, what makes it ideal to use?

Bo Weaver: First, it runs on Linux and is built on Debian Linux. Second, the people at Offensive Security do a fantastic job of keeping it updated and stable, with the latest tools to support not just pentesting but also forensics work and network analysis and diagnostics. You can tell that this platform is built and maintained by real security experts and isn't some distro thrown together by marketing folks to make a buck.

Wolf Halton: Kali is a very stable and extensible open source platform. Offensive Security's first security platform, BackTrack, was customised in a non-POSIX way, breaking from UNIX and other Linux distros by putting the security tools in unexpected places in the filesystem. Since Kali was first released, they have used Debian Testing as a base and adhered to the usual file locations, which has made Kali Linux far easier to use. The normalization of the OS behind the Kali distro makes it more productivity-friendly than most of the other "security distros," which are usually too self-consciously different; their developers carve out their space in the mass of distros by how quirky the interface is or how customizable the installation process has to be.

Why do you love working with Kali Linux?

Bo Weaver: I appreciate its stability. In all the years I have used Kali on a daily basis, I have had only one failure to update properly, and even then I didn't have any data loss. I run Kali as my "daily driver" on both my personal and company laptops, so one failure in all that time is nothing. I even do my writing from my Kali machines. (Yes, I do all my normal computing from a normal user account and NOT root!) I don't have to go looking for a tool: any tool that I need is either installed or in the repo. Since everything comes from the same repo, keeping all my tools and the system updated is just a simple command.

Wolf Halton: Kali is a stable platform, based upon a major distribution with which I am very familiar. There are over 400 security tools in the Kali repos, and it can also draw directly from the Debian Testing repos for even more tools. I always add a few applications on top of the default set of packages, but the menus work predictably, allowing me to install what I need without having to create a whole new menu system to get to them.

Can you tell the readers about some advantages and disadvantages of using Kali Linux for pentesting?

Bo Weaver: I really can't think of a disadvantage. The biggest advantage is that all these tools are in one toolbox (Kali). I remember a time when building a pentesting machine would take a week: you had to go out, find the tools, and build them separately, and most tools had to be manually compiled for the machine. Remember "make", "make install"? Then to have it bork over a missing library file? Now, in less than an hour, you can have a working pentesting machine running. As mentioned earlier, Kali has the tools to do any security job, not just pentesting: pulling evidence from a laptop for legal reasons, analyzing a network, finding what is breaking your network, or breaking into a machine because the passwords are lost. Also, it runs on anything from a high-end workstation to a Raspberry Pi or a USB drive with no problem.

Wolf Halton: The biggest disadvantage is for Windows-centric users who have never used any other operating system; in our book, we try to ease these users into the exciting world of Linux. The biggest advantage is that the Kali Linux distro is in constant development. I can be sure that there will be a Kali distro available even if I wander off for a year. This is a great benefit for people who only use Linux when they want to run an ad hoc penetration test.

Can you give us a specific example (real or fictional) of why Kali Linux is the ideal solution to go for?

Bo Weaver: There are other distros out there for this use, but most don't have the completeness of toolsets. Most security distros are set up to be run from a DVD and only contain a few tools to do a couple of tasks, not all security tasks. BlackArch Linux is the closest to Kali in comparison. BlackArch is built on Arch Linux, which is a bleeding-edge distro and doesn't have the stability of Debian. Sometimes Arch will bork on an update due to buggy bleeding-edge code. This is fine in a testing environment, but when working in production, you need your system to run at the time of testing; it's embarrassing to call the customer and say you lost three hours on a test fixing your machine. I'm not knocking BlackArch. They did a fine job on the build and the toolsets included; I just don't trust Arch to be stable enough for me. That is not saying anything bad about Arch Linux: it has its place in the distro world and fills it well. Some people like bleeding edge; it's just a personal choice. The great thing about Linux overall is that you have choices. You're not locked into one way a system looks or works. Kali comes with five different desktop environments, so you can choose which one is best for you. I personally like KDE.

Wolf Halton: I have had to find tools for various purposes: tools to recover data from failed hard drives, tools to stress-test hundreds of systems at a time, tools to check whole networks at a time for vulnerabilities, tools to check for weak passwords, tools to perform phishing tests on email users, and tools to break into Windows machines, security appliances, and network devices. Kali Linux is the one platform where I could find multiple tools to perform all of these tasks and many more.

Congratulations on your recent book, Kali Linux 2018: Windows Penetration Testing - Second Edition. Can you elaborate on the key takeaways for readers?

Bo Weaver: I hope the readers come out with a greater understanding of system and network security, and of how easy it is to breach a system if simple, proper security rules are not followed. By following simple no-cost rules, like properly updating your systems and proper network segmentation, you can defeat most of the exploits in the book. Over the years, Wolf and I have been asked by a lot of Windows administrators, "How do you do a pentest?" Such a person doesn't want a simple, glossed-over answer. They are engineers who understand their systems and how they work; they want a blow-by-blow description of how you actually broke in, so they can understand the problem and properly fix it. The book is the perfect solution for them. It contains methods we use in our work on a daily basis, from scanning to post-exploitation work. Also, I hope the readers discover how easy Linux is to use as a desktop workstation, see the security advantages of using Linux as their workstation OS, and make the switch from Windows to the Linux desktop. I want to thank the readers of our book and hope they walk away with a greater understanding of system security.

Wolf Halton: The main thing we tried to do with both the first and second editions of this book is to give a useful engineer-to-engineer overview of the possibilities of using Kali to test one's own network, including very specific approaches and methods to prove the network's security. We never write fictionalized, unworkable testing scenarios, as we believe our readers want to actually know how to improve their craft and make their networks safer, even when there is no budget for fancy-schmancy proprietary Windows-based security tools that make their non-techie managers feel safer. The world of pentesting is still edgy and interesting, and we try to infuse the book with our own keen interest in testing and developing attack models before the Red-Team hackers get there.

Thanks, Bo and Wolf, for a very insightful perspective on the world of pentesting and on Kali Linux! Readers, if you are looking for help to quickly pentest your system and network using easy-to-follow instructions and supporting images, Kali Linux 2018: Windows Penetration Testing - Second Edition might just be the book for you.

Author Bio

Wolf Halton is an authority on computer and internet security, a best-selling author on computer security, and the CEO of Atlanta Cloud Technology. He specializes in business continuity, security engineering, open source consulting, marketing automation, virtualization and data center restructuring, network architecture, and Linux administration.

Bo Weaver is an old-school, ponytailed geek. His first involvement with networks was in 1972, while in the US Navy, working on an R&D project called ARPANET. Bo has been working with and using Linux daily since the 1990s and is a promoter of open source. (Yes, Bo runs on Linux.) He now works as the senior penetration tester and security researcher for CompliancePoint, an Atlanta-based security consulting company.

Pentest tool in focus: Metasploit
Kali Linux 2018.2 released
How artificial intelligence can improve pentesting

We discuss the key trends for web and app developers in 2019 [Podcast]

Richard Gall
21 Dec 2018
1 min read
How will web and app development evolve in 2019? What are some of the key technologies that you should be investigating if you want to stay up to date in the new year? And what can give you a competitive advantage? This post should help you get the lowdown on some of the shifting trends to be aware of, but I also sat down to discuss some of these issues with my colleague Stacy in the second Packt podcast.

https://soundcloud.com/packt-podcasts/why-the-stack-will-continue-to-shrink-for-app-and-web-developers-in-2019

Let us know what you think - and if there's anything you'd like us to discuss on future podcasts, please get in touch!

Listen: We discuss why chaos engineering and observability will be important in 2019 [Podcast]

Richard Gall
21 Dec 2018
1 min read
This week I published a post that explored some of the key trends in software infrastructure that security engineers, SREs, and SysAdmins should be paying attention to in 2019. There was clearly a lot to discuss, which is why I sat down with my colleague Stacy Matthews to explore some of the topics from the post in a little more depth. Enjoy!

https://soundcloud.com/packt-podcasts/why-observability-and-chaos-engineering-will-be-vital-in-2019

What do you think? Is chaos engineering too immature for widespread adoption? And how easy will it be to begin building for observability?

“Deep learning is not an optimum solution for every problem faced”: An interview with Valentino Zocca

Sunith Shetty
14 Nov 2018
11 min read
Over the past few years, we have seen advanced artificial intelligence technologies shaping human life. Deep learning (DL) has become the main driving force behind new innovations in almost every industry, and we are sure to continue to see DL everywhere. Most companies, including startups, are already integrating deep learning into their day-to-day processes. Deep learning techniques and algorithms have made building advanced neural networks practically feasible, thanks to high-level open source libraries such as TensorFlow, Keras, PyTorch, and more.

We recently interviewed Valentino Zocca, a deep learning expert and the author of the book Python Deep Learning. Valentino explains why deep learning is getting so much hype and what the roadmap ahead looks like in terms of new technologies and libraries. He also talks about how major vendors and tech-savvy startups adopt deep learning within their organizations. As a consultant and an active developer, he expects a better approach than back-propagation to emerge for deep learning tasks.

Author's Bio

Valentino Zocca graduated with a Ph.D. in mathematics from the University of Maryland, USA, with a dissertation in symplectic geometry, after having graduated with a laurea in mathematics from the University of Rome. He spent a semester at the University of Warwick. After a post-doc in Paris, Valentino started working on high-tech projects in the Washington, D.C. area, where he played a central role in the design, development, and realization of an advanced stereo 3D Earth visualization software with head tracking at Autometric, a company later bought by Boeing. At Boeing, he developed many mathematical algorithms and predictive models and, using Hadoop, automated several satellite-imagery visualization programs. He has since become an expert on machine learning and deep learning and has worked at the U.S. Census Bureau and as an independent consultant, both in the US and in Italy. He has also held seminars on machine learning and deep learning in Milan and New York. Currently, Valentino lives in New York and works as an independent consultant to a large financial company, where he develops econometric models and uses machine learning and deep learning to create predictive models. He often travels back to Rome and Milan to visit his family and friends.

Key Takeaways

- Deep learning is one of the most widely adopted techniques in image and speech recognition and in anomaly detection research and development.
- Deep learning is not the optimum solution for every problem faced. Depending on the complexity of the challenge, building the neural network can be tricky.
- Open-source tools will continue to compete with enterprise software, and more and more features are expected to improve the efficiency and power of deep learning solutions.
- Across organizations, deep learning is used as a tool rather than a solution, and how the tool is used differs with the problem at hand.
- Emerging specialized chips are expected to bring deep learning to the mobile, IoT, and security domains.
- Valentino Zocca states that we have a quantity vs. quality problem: we will require better paradigms and approaches in the future, achieved through research-driven innovation rather than by relying on hardware. We can make faster machines, but our goal is really to make more intelligent machines.

Full Interview

Deep learning is as much infamous as it is famous in the machine learning community, with camps passionately supporting and opposing its use. Where do you fall on this spectrum? If you were given a chance to convince the rival camp with 5-10 points on your stand about DL, what would your pitch be like?

The reality is that deep learning techniques have their own advantages and disadvantages. The areas where deep learning clearly outperforms most other machine learning techniques are image and speech recognition and anomaly detection. One of the reasons why deep learning does so much better is that these problems can be decomposed into a hierarchical set of increasingly complex structures, and, in multi-layer neural nets, each layer learns these structures at a different level of complexity. For example, in image recognition, the first layers will learn about the lines and edges in the image; the subsequent layers will learn how these lines and edges come together to form more complex shapes, like the eyes of an animal; and the last layers will learn how these more complex shapes form the final image. However, not every problem can suitably be decomposed using this hierarchical approach.

Another issue with deep learning is that it is not yet completely understood how it works, and some heavily regulated areas, for example banking, may not be able to easily justify predictions made with it. Finally, many neural nets may require a heavier computational load than other classical machine learning techniques. Therefore, the reality is that one still needs a proficient machine learning expert who deeply understands the functioning of each approach and can make the best decision depending on each problem. Deep learning is not, at the moment, a complete solution to any problem; in general, there is no definite side to pick, and it really depends on the problem at hand.

Deep learning can conquer tough challenges, no doubt. However, there are many common myths and realities around deep learning. Would you like to give your supporting reasoning on whether the following statements are myth or fact?

- You need to be a machine learning expert or a math geek to build deep learning models.
- We need powerful hardware resources to use deep learning.
- Deep learning models are always learning; they improve with new data automagically.
- Deep learning is a black box, so we should avoid using it in production environments or real-world applications.
- Deep learning is doomed to fail. It will eventually be replaced by data-sparse, resource-economic learning methods like meta-learning or reinforcement learning.
- Deep learning is going to be central to the progress of AGI (artificial general intelligence) research.

Deep learning has become almost a buzzword, so a lot of people are talking about it, sometimes misunderstanding how it works. People hear the word DL together with "it beats the best player at Go" or "it can recognize things better than humans", and they think that deep learning is a mature technology that can solve any problem. In actuality, deep learning is a mature technology only for some specific problems. You do not solve everything with deep learning, and yet at times, whatever the problem, I hear people asking me, "Can't you use deep learning for it?" The truth is that we have lots of libraries ready to use for deep learning.
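To ground the point above about layers learning increasingly complex structures, and to show how little code those ready-to-use libraries demand, here is a minimal sketch of a small convolutional network in Keras, one of the libraries mentioned earlier. This is an illustrative example, not code from the interview; the input shape, layer widths, and class count are arbitrary assumptions.

```python
# Illustrative sketch only (not from the interview). Each convolutional
# block learns features at a different level of complexity: edges first,
# then simple shapes, then more complex object parts.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),            # arbitrary image size (assumption)
    layers.Conv2D(16, 3, activation="relu"),    # low level: lines and edges
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),    # mid level: simple shapes
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),    # high level: complex parts
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),     # final image classification
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```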
For example, you don't need to be a machine learning expert or a math geek to build simple deep learning models for run-of-the-mill problems, but in order to solve some of the challenges that less common issues present, a good understanding of how a neural network works may indeed be very helpful. Like everything, you can find a grain of truth in each of those statements, but they should not be taken at face value.

With MLaaS being provided by many vendors, from Google to AWS to Microsoft, deep learning is gaining widespread adoption not just within large organizations but also among data-savvy startups. How do you view this trend? More specifically, is deep learning being used differently by these two types of organizations? If so, what could be some key reasons?

Deep learning is not a monolithic approach. We have different types of networks: ANNs, CNNs, LSTMs, RNNs, etc. Honestly, it makes little sense to ask whether DL is being used differently by different organizations. Deep learning is a tool, not a solution, and like all tools it should be used differently depending on the problem at hand, not depending on who is using it.

There are many open source tools and enterprise software products (especially the ones which claim you don't need to code much) in the race. Do you think the future is one where more and more people opt for ready-to-use (MLaaS) enterprise-backed cognitive tools like IBM Watson rather than open-source tools?

This holds true for everything. At the beginning of the internet, people would write their own HTML code for their web pages; now we use tools that do most of the work for us. But if we want something to stand out, we need a professional designer. The more a technology matures, the more ready-to-use tools become available, but that does not mean that we will never need professional experts to improve on those tools and provide specialized solutions.

Deep learning is now making inroads into the mobile, IoT, and security domains as well. What makes DL great for these areas? What are some challenges you see while applying DL in these new domains?

I do not have much experience with DL on mobiles, but that is clearly a direction that is becoming increasingly important. I believe we can address these new domains by building specialized chips.

Deep learning is a deeply researched topic within the machine learning and AI communities. Every year brings us new techniques, from neural nets to GANs to capsule networks, that get widely adopted both in research and in real-world applications. What are some cutting-edge techniques you foresee getting public attention in deep learning in 2018 and in the near future? And why?

I am not sure we will see anything new in 2018, but I am a big supporter of the idea that we need a better paradigm that can excel more at inductive reasoning rather than just deductive reasoning. At the end of last year, even DL pioneer Geoff Hinton admitted that we need a better approach than back-propagation; however, I doubt we will see anything new coming out this year. It will take some time.

We keep hearing about noteworthy developments in AI and deep learning from DeepMind and OpenAI. Do you think they have the required armory to revolutionize how deep learning is performed? What are some key challenges for such deep learning innovators?

As I mentioned before, we need a better paradigm, but nobody knows what this paradigm is. Gary Marcus is a strong proponent of introducing more structure into our networks, and I do concur with him; however, it is not easy to define what that structure should be. Many people want to use the brain as a model, but computers are not biological structures, and if we had tried to build airplanes by mimicking how a bird flies, we would not have gone very far. I think we need a clean break and a new approach; I do not think we can go very far by simply refining and improving what we have.

Improvements in processing capabilities and the availability of custom hardware have propelled deep learning into production-ready environments in recent years. Can we expect more chips and other hardware improvements in the coming years for GPU-accelerated deep learning and distributed training? What other supporting factors will facilitate the growth of deep learning?

Once again, foreseeing the future is not easy; however, as these questions are related, I think only so much can be gained by improving chips and GPUs. We have a quantity vs. quality problem. We can improve quantity (of speed, memory, etc.) through hardware improvements, but the real problem is that we need a real quality improvement: better paradigms and approaches, achieved through research and not with hardware solutions. We can make faster machines, but our goal is really to make more intelligent machines. A child can learn by seeing just a few examples; we should be able to create an approach that allows a machine to also learn from few examples, not by cramming millions of examples in a short time.

Would you like to add anything more for our readers?

Deep learning is a fascinating discipline, and I would encourage anyone who wants to learn more about it to approach it as a research project, without underestimating his or her own creativity and intuition. We need new ideas.

If you found this interview to be interesting, make sure you check out other insightful interviews on a range of topics:

Blockchain can solve tech's trust issues - Imran Bashir
"Tableau is the most powerful and secure end-to-end analytics platform": An interview with Joshua Milligan
"Pandas is an effective tool to explore and analyze data": An interview with Theodore Petrou

“With Python, you can create self-explanatory, concise, and engaging data visuals, and present insights that impact your business” - Tim Großmann and Mario Döbler [Interview]

Savia Lobo
10 Nov 2018
6 min read
Data today is the world's most important resource. However, without properly visualizing your data to discover meaningful insights, it's useless. Creating visualizations helps in getting a clearer and more concise view of the data, making it more tangible for (non-technical) audiences. To illustrate this, below are questions aimed at giving you an idea of why data visualization is so important and why Python should be your choice. In a recent interview, Tim Großmann and Mario Döbler, the authors of the course Data Visualization with Python, shared with us the importance of data visualization and why Python is the best fit for carrying it out.

Key Takeaways

- Data visualization is a great way, and sometimes the only way, to make sense of the constantly growing mountain of data being generated today.
- With Python, you can create self-explanatory, concise, and engaging data visuals, and present insights that impact your business.
- Your data visualizations will make information more tangible for stakeholders while telling them an interesting story.
- Visualizations are a great tool for transferring your understanding of the data to a less technical co-worker, building a faster and better understanding of the data.
- Python is the most requested and used language in the industry. Its ease of use, the speed at which you can manipulate and visualize data, and the number of available libraries make Python the best choice.

Full Interview

Why is data visualization important? What problem is it solving?

As the amount of data grows, the need for developers with knowledge of data analytics, and especially data visualization, spikes. In recent years we have experienced exponential growth of data; currently, the amount of data doubles every two years. For example, more than eight thousand tweets are sent per second, and more than eight hundred photos are uploaded to Instagram per second. To cope with these large amounts of data, visualization is essential to make it more accessible and understandable. Everyone has heard the saying that a picture is worth a thousand words: humans process visual data better and faster than any other type of data. Another important point is that data is not necessarily the same as information. Often people aren't interested in the data itself, but in some information hidden in it. Data visualization is a great tool for discovering hidden patterns and revealing the relevant information. It bridges the gap between quantitative data and human reasoning; in other words, visualization turns data into meaningful information.

What other similar solutions or tools are out there? Why is Python better?

Data visualizations can be created in many ways using many different tools. MATLAB and R are two of the languages heavily used in the fields of data science and data visualization. There are also some non-coding tools, like Tableau, which are used to quickly create basic visualizations. However, Python is the most requested and used language in the industry. Its ease of use, the speed at which you can manipulate and visualize data, and the number of available libraries make Python the best choice. In addition to all the perks mentioned, Python has an incredibly big ecosystem with thousands of active developers. Python really differs in that it allows users to build their own small additions to the tools they use, if necessary. There are examples of pretty much everything online for you to use, modify, and learn from.
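As a small taste of that ease of use, the sketch below uses Matplotlib and NumPy with synthetic, hypothetical data to show how just a couple of plots can already expose a column's distribution and a relationship between two columns. This is an illustrative sketch, not code from the interview or the course.

```python
# A minimal sketch with synthetic, hypothetical data: two quick plots
# can reveal structure that a raw table of numbers hides.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
x = rng.normal(size=500)
y = 2.5 * x + rng.normal(scale=1.5, size=500)  # invented linear trend with noise

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(x, bins=30)                  # distribution of one column
ax1.set_title("Distribution of x")
ax2.scatter(x, y, s=8, alpha=0.5)     # relationship between two columns
ax2.set_title("x vs. y")
plt.tight_layout()
plt.show()
```

Even plots this simple often reveal skew, clusters, or outliers that a table of numerical values would keep hidden.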
How can data visualization help developers? Give specific examples of how it can solve a problem.

Working with, and especially understanding, large amounts of data can be a hard task. Without visualizations, it might even be impossible for some datasets. Especially when you need to transfer your understanding of the data to a less technical co-worker, visualizations are a great tool for a faster and better understanding of the data. In general, looking at your data visualized often says more than a thousand words. Imagine getting a dataset that only consists of numerical columns. Getting good insights into this data without visualizations is impossible, yet even with some simple plots, you can often improve your understanding of the most difficult datasets. Think back to the last time you had to give a presentation about your findings and all you had was a table of numerical values. You understood it, but your colleagues sat there and scratched their heads. Had you instead created some simple visualizations, you would have impressed the entire team with your results.

What are some best practices for learning and using data visualization with Python?

Some of the best practices you should keep in mind while visualizing data with Python are:

- Start looking at and experimenting with examples
- Start from scratch and build on it
- Make full use of the documentation
- Use every opportunity you have with data to visualize it

To know more about these best practices in detail, read our detailed notes on 4 tips for learning Data Visualization with Python.

What are some myths and misconceptions surrounding data visualization with Python?

- Data visualizations are just for data scientists
- Its technologies are difficult to learn
- Data visualization isn't needed for data insights
- Data visualization takes a lot of time

Read about these myths in detail in our article, 'Python Data Visualization myths you should know about'.

Data visualization in combination with Python is an essential skill when working with data. Properly utilized, it is a powerful combination that not only enables you to get better insights into your data but also gives you the tools to communicate results better. Data nowadays is everywhere, so developers of every discipline should be able to work with it and understand it.

About the authors

Tim Großmann is a CS student with an interest in diverse topics ranging from AI to IoT. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of big data engineering. He's highly involved in different open source projects and actively speaks at meetups and conferences about his projects and experiences.

Mario Döbler is a graduate student with a focus on deep learning and AI. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of deep learning, using state-of-the-art algorithms to develop cutting-edge products. Currently, he dedicates himself to applying deep learning to medical data to make health care accessible to everyone.

Setting up Apache Druid in Hadoop for Data visualizations [Tutorial]
8 ways to improve your data visualizations
Getting started with Data Visualization in Tableau

“Instead of data scientists working on their models and advancing AI, they are spending their time doing DeepOps work”, MissingLink CEO, Yosi Taguri [Interview]

Amey Varangaonkar
08 Nov 2018
10 min read
Machine learning has shown immense promise across domains and industries over recent years. From helping with the diagnosis of serious ailments to powering autonomous vehicles, machine learning is finding useful applications across a spectrum of industries. However, the actual process of delivering business outcomes using machine learning currently takes too long and is too expensive, forcing some businesses to look for less burdensome alternatives. MissingLink.ai is a recently launched platform built to fix just this problem. It enables data scientists to spend less time on grunt work by automating and streamlining the entire machine learning cycle, giving them more time to apply the actionable insights gleaned from the data.

Key Takeaways

- Processing and managing the sheer volume of data is one of the key challenges that today's AI tools face.
- Yosi thinks the idea of companies creating their own machine learning infrastructure doesn't make a lot of sense. Data professionals should be focusing on more important problems within their organizations and letting the platform take care of the grunt work.
- MissingLink.ai is an AI platform born out of the need to simplify AI development by taking the common, menial data processing tasks away from data scientists and allowing them to focus on the bigger data-related issues, through experiment management, data management, and resource management.
- MissingLink is part of the Samsung NEXT product development team, which aims to help businesses automate and accelerate their projects using machine learning.

We had the privilege of interviewing Mr. Yosi Taguri, the founder and CEO of MissingLink, to learn more about the platform and how it enables more effective deep learning.

What are the biggest challenges that companies come across when trying to implement a robust machine learning/deep learning pipeline in their organization? How does it affect their business?

The biggest challenge, simply put, is that today's AI tools can't keep up with the amount of data being produced. And it's only going to get more challenging from here! As datasets continue to grow, they will require more and more compute power, which means we risk falling farther behind unless we change the tools we're using. While everyone is talking about the promise of machine learning, the truth is that today, assessing data is still too time-consuming and too expensive. Engineers are spending all their time managing the sheer volume of data, rather than actually learning from it and being empowered to make changes.

Let's talk about MissingLink.ai, the platform you and your team have launched for accelerating deep learning across businesses. Why the name MissingLink? What was the motivation for launching this platform?

The name is actually a funny story, and it ties pretty neatly into why we created the platform. When we were starting out three years ago, deep learning was still a relatively new concept, and my team and I were working hard to master its intricacies. As engineers, we primarily worked with code, so being able to solve problems with data was a fascinating new challenge for us. We quickly realized that deep learning is really hard and moves very, very slowly. So we set out to solve the problem of how to build really smart machines really fast. By comparison, we thought of it through the lens of software development. Our goal was to accelerate from a glacial pace to building machine learning algorithms faster, because we felt that there was something missing: a missing link, if you will.

MissingLink is a part of the growing Samsung NEXT product development team. How does it feel? What role do you think MissingLink will play in Samsung NEXT's plans and vision going forward?

Samsung NEXT's broader mission is to help startups reach their full potential and achieve their goals. More specifically, Samsung NEXT discovers and backs the engineers, innovators, builders, and entrepreneurs who will help Samsung define the future of software and services. The Samsung NEXT product development team is focused on building software and services that take advantage of, and accelerate, opportunities related to some of the biggest shifts in technology, including automation, supply and demand, and interfaces. This will require hardware and software to seamlessly come together. Over the past few years, nearly all startups have been leveraging AI for some component of their business, yet practical progress has been slower than promised. MissingLink is a foundational tool to enable the progress of these big changes, helping entrepreneurs with great use cases for machine learning to accelerate their projects.

Could you give us the key features of MissingLink.ai that make it stand out from the other AI platforms available out there? How will it help data scientists and ML engineers build robust, efficient machine learning models?

First off, MissingLink.ai is the most comprehensive AI platform out there. It handles the entire deep learning lifecycle and all its elements, including code, data, experiments, and resources. I'd say that our top features include:

- Experiment management: see and compare the entire history of experiments. MissingLink.ai auto-documents every aspect.
- Data management: a unique data store tracks the data versions used in every experiment, streams data, caches it locally, and only syncs changes.
- Resource management: manages your resources with no extra infrastructure costs, using your AWS or other cloud credentials. It grows and shrinks your cloud resources as needed.

These features, together with our intuitive interface, really put data scientists and engineers in the driver's seat when creating AI models. Now they can have more control and spend less energy repeating experiments, giving them more time to focus on what is important.

Your press release on the launch of MissingLink states that "the actual process of delivering business outcomes currently takes too long and it is too expensive. MissingLink.ai was born out of a desire to fix that." Could you please elaborate on how MissingLink makes deep learning less expensive and more accessible?

Companies are currently spending too much time and devoting too many resources to the menial tasks that are necessary for building machine learning models. The more time data scientists spend on tasks like spinning up machines, copying files, and DevOps, the more money a company is wasting. MissingLink changes that through the introduction of something we're calling DeepOps, or deep learning operations, which allows data scientists to focus on data science and lets the machine take care of the rest. It's like DevOps, where the role is about making the process of software development more efficient and production-ready, but the difference is that no one has been filling this role, and it's different enough that it's very specific to the task of deep learning. Today, instead of data scientists working on their models and advancing AI, they are spending their time doing this DeepOps work. MissingLink reduces load time and facilitates easy data exploration by eliminating the need to copy files, through data management in a version-aware data store.

Most businesses are moving their operations onto the cloud these days, with AWS, Azure, GCP, etc. being their preferred cloud solutions. These platforms have sophisticated AI offerings of their own. Do you see AI platforms such as MissingLink.ai as competition to these vendors, or can the two work collaboratively?

I wouldn't call cloud companies our competitors; we don't provide the cloud services they do, and they don't provide the DeepOps service that we do. Yes, we are all trying to simplify AI, but we're going about it in very different ways. We can actually use a customer's public cloud provider as the infrastructure to run the MissingLink.ai platform. If customers provide us with their cloud credentials, we can even manage this for them directly.

Concepts such as reinforcement learning and deep learning for mobile are getting a lot of traction these days and have moved out of the research phase into the application/implementation phase. Soon, they might start finding extensive business applications as well. Are there plans to incorporate these tools and techniques into the platform in the near future?

We support all forms of deep learning, including reinforcement learning. On the deep learning for mobile side, we think the edge is going to be a big thing as more and more developers around the world are exposed to deep learning. We plan to support it early next year.

Currently, data privacy and AI ethics have become a focal point of every company's AI strategy. We see tech conglomerates increasingly coming under the scanner for ignoring these key aspects in their products and services. This is giving rise to an alternate movement in AI, with privacy- and ethics-centric projects like Deon, Vivaldi, and Tim Berners-Lee's Solid. How does MissingLink approach the topics of privacy, user consent, and AI ethics? Are there processes, tools, or principles in place in the MissingLink ecosystem or development teams that balance these concerns?

When we started MissingLink, we understood that data is the most sensitive part of deep learning. It is the new IP. Companies spend 80% of their time attending to data, refining it, tagging it, and storing it, and are therefore reluctant to upload it to a third-party vendor. We have built MissingLink with that in mind. Our solution allows customers to simply point us to where their data is stored internally, and without moving it, or having to access it as a SaaS solution, we are able to help them manage it by enabling version management, as they do with code. We can then stream it directly to the machines that need the data for processing and document which data was used, for reproducibility.

Finally, where do you see machine learning and deep learning heading in the near future? Do you foresee a change in the way data professionals work today? How will platforms like MissingLink.ai change the current way of working?

Right now, companies are creating their own machine learning infrastructure, and that doesn't make sense. Data professionals can and should be focusing on more important problems within their organizations. Platforms like MissingLink.ai free data scientists from the grunt work it takes to maintain the infrastructure, so they can focus on bigger-picture issues. This is the work that is not only more rewarding for engineers but also creates the unique value that companies need to compete. We're excited to help empower more data professionals to focus on the work that actually matters.

It was wonderful talking to you, and this was a very insightful discussion. Thanks a lot for your time, and all the best with MissingLink.ai!

Read more

Michelangelo PyML: Introducing Uber's platform for rapid machine learning development
Tesseract version 4.0 releases with new LSTM based engine, and an updated build system
Baidu releases a new AI translation system, STACL, that can do simultaneous interpretation

“Data is the new oil but it has to be refined through a complex processing network” - Tirthajyoti Sarkar and Shubhadeep Roychowdhury [Interview]

Sugandha Lahoti
25 Oct 2018
7 min read
Data is the new oil and is just as crude as unrefined oil. To do anything meaningful - modeling, visualization, machine learning, for predictive analysis – you first need to wrestle and wrangle with data. We recently interviewed Dr. Tirthajyoti Sarkar and Shubhadeep Roychowdhury, the authors of the course Data Wrangling with Python. They talked about their new course and discuss why do data wrangling and why use Python to do it. Key Takeaways Python boasts of a large, broad library equipped with a rich set of modules and functions, which you can use to your advantage and manipulate complex data structures NumPy, the Python library for fast numeric array computations and Pandas, a package with fast, flexible, and expressive data structures are helpful in working with “relational” or “labeled” data. Web scraping or data extraction becomes easy and intuitive with Python libraries, such as BeautifulSoup4 and html5lib. Regex, the tiny, highly specialized programming language inside Python can create patterns that help match, locate, and manage text for large data analysis and searching operations Present interesting, interactive visuals of your data with Matplotlib, the most popular graphing and data visualization module for Python. Easily and quickly separate information from a huge amount of random data using Pandas, the preferred Python tool for data wrangling and modeling. Full Interview Congratulations on your new course ‘Data wrangling with Python’. What this course is all about? Data science is the ‘sexiest job’ of 21st century’ (at least until Skynet takes over the world). But for all the emphasis on ‘Data’, it is the ‘Science’ that makes you - the practitioner - valuable. To practice high-quality science with data, first you need to make sure it is properly sourced, cleaned, formatted, and pre-processed. This course teaches you the most essential basics of this invaluable component of the data science pipeline – data wrangling. What is data wrangling and why should you learn it well? “Data is the new Oil” and it is ruling the modern way of life through incredibly smart tools and transformative technologies. But oil from the rig is far from being usable. It has to be refined through a complex processing network. Similarly, data needs to be curated, massaged and refined to become fit for use in intelligent algorithms and consumer products. This is called “wrangling” and (according to CrowdFlower) all good data scientists spend almost 60-80% of their time on this, each day, every project. It generally involves the following: Scraping the raw data from multiple sources (including web and database tables), Inputing, formatting, transforming – basically making it ready for use in the modeling process (e.g. advanced machine learning), Handling missing data gracefully, Detecting outliers, and Being able to perform quick visualizations (plotting) and basic statistical analysis to judge the quality of your formatted data This course aims to teach you all the core ideas behind this process and to equip you with the knowledge of the most popular tools and techniques in the domain. As the programming framework, we have chosen Python, the most widely used language for data science. We work through real-life examples and at the end of this course, you will be confident to handle a myriad array of sources to extract, clean, transform, and format your data for further analysis or exciting machine learning model building. Walk us through your thinking behind how you went about designing this course. 
Walk us through your thinking behind how you went about designing this course. What's the flow like? How do you teach data wrangling in this course?

The lessons start with a refresher on Python, focusing mainly on advanced data structures, and then quickly jump into the NumPy and Pandas libraries as the fundamental tools for data wrangling. The course emphasizes why you should stay away from traditional ways of data cleaning, as done in other languages, and take advantage of the specialized pre-built routines in Python. Thereafter, it covers how, using the same Python backend, one can extract and transform data from a diverse array of sources: the internet, large database vaults, or Excel financial tables. Further lessons teach how to handle missing or wrong data, and how to reformat it based on the requirements of a downstream analytics tool. The course emphasizes learning by real example and showcases the power of an inquisitive and imaginative mind primed for success.

What other tools are out there? Why do data wrangling with Python?

First, let us be clear that there is no substitute for the data wrangling process itself, and no shortcut either. Data wrangling must be performed before the modeling task, but there is always the debate of whether to do it using an enterprise tool or directly in a programming language and its associated frameworks. There are many commercial, enterprise-level tools for data formatting and pre-processing which do not involve coding on the part of the user. Common examples of such tools are:

General-purpose data analysis platforms such as Microsoft Excel (with add-ins)
Statistical discovery packages such as JMP (from SAS)
Modeling platforms such as RapidMiner
Analytics platforms from niche players focusing on data wrangling, such as Trifacta, Paxata, and Alteryx

At the end of the day, it really depends on the organizational approach whether to use any of these off-the-shelf tools or to have more flexibility, control, and power by using a programming language like Python for data wrangling. As the volume, velocity, and variety (the three V's of Big Data) of data undergo rapid changes, it is always a good idea to develop and nurture a significant amount of in-house expertise in data wrangling, built on fundamental programming frameworks, so that an organization is not beholden to the whims and fancies of any particular enterprise platform for a task as basic as data wrangling. Some of the obvious advantages of using an open-source, free programming paradigm like Python for data wrangling are:

A general-purpose, open-source paradigm that puts no restrictions on the methods you can develop for the specific problem at hand
A great ecosystem of fast, optimized, open-source libraries focused on data analytics
Growing support for connecting Python to every conceivable data source type
An easy interface to basic statistical testing and quick visualization libraries for checking data quality
A seamless interface from the data wrangling output to advanced machine learning models; Python is the most popular language of choice for machine learning and artificial intelligence these days
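To illustrate the point about data sources, here is a small sketch of how BeautifulSoup4, one of the libraries mentioned above, turns raw HTML into a labeled Pandas structure. The HTML snippet and column names are invented stand-ins for a real page you might fetch over the network:

```python
import pandas as pd
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Invented HTML snippet, standing in for a page fetched with requests
html = """
<table>
  <tr><th>country</th><th>gdp_growth</th></tr>
  <tr><td>A</td><td>1.2</td></tr>
  <tr><td>B</td><td>3.4</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
rows = soup.find_all("tr")

# First row holds the headers; the rest hold the data cells
header = [th.get_text() for th in rows[0].find_all("th")]
data = [[td.get_text() for td in row.find_all("td")] for row in rows[1:]]

# Load into Pandas and transform the scraped strings into numbers
df = pd.DataFrame(data, columns=header)
df["gdp_growth"] = df["gdp_growth"].astype(float)
print(df)
```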
What are some best practices to perform data wrangling with Python?

Here are five best practices that will help you out in your data wrangling journey with Python. In the end, all you'll have is clean, ready-to-use data for your business needs.

Learn the data structures in Python really well
Learn and practice file and OS handling in Python
Have a solid understanding of the core data types and capabilities of NumPy and Pandas
Build a good understanding of basic statistical tests and a penchant for visualization
Apart from Python, if you want to master one more language, go for SQL

What are some misconceptions about data wrangling?

Though data wrangling is an important task, there are certain myths associated with it that developers should be cautious of. Myths such as:

Data wrangling is all about writing SQL queries
Knowledge of statistics is not required for data wrangling
You have to be a machine learning expert to do great data wrangling
Deep knowledge of programming is not required for data wrangling

You can learn about these misconceptions in detail in the authors' post. We hope this helps you realize that data wrangling is not as difficult as it seems. Have fun wrangling data!

About the authors

Dr. Tirthajyoti Sarkar works as a Sr. Principal Engineer in the semiconductor technology domain, where he applies cutting-edge data science and machine learning techniques for design automation and predictive analytics. Shubhadeep Roychowdhury works as a Sr. Software Engineer at a Paris-based cybersecurity startup. He holds a Master's degree in Computer Science from West Bengal University of Technology and certifications in Machine Learning from Stanford.

5 best practices to perform data wrangling with Python
4 misconceptions about data wrangling
Data cleaning is the worst part of data analysis, say data scientists
“Git, like all other version control tools, exists to solve for one problem: change” - Joseph Muli and Alex Magana [Interview]

Packt Editorial Staff
09 Oct 2018
5 min read
An unreliable versioning tool makes product development a herculean task. Creating and enforcing checks and controls for the introduction, scrutiny, approval, merging, and reversal of changes in your source code are effective methods to ensure a secure development environment. Git and GitHub offer constructs that enable teams to conduct version control and collaborative development effectively. When properly utilized, Git and GitHub promote agility and collaboration across a team, and in doing so enable teams to focus and deliver on their mandates and goals.

We recently interviewed Joseph Muli and Alex Magana, the authors of the Introduction to Git and GitHub course. They discussed the various benefits of Git and GitHub while sharing some best practices and myths.

Author Bio

Joseph Muli loves programming, writing, teaching, gaming, and travelling. Currently, he works as a software engineer at Andela and Fathom, and specializes in DevOps and Site Reliability. Previously, he worked as a software engineer and technical mentor at Moringa School. You can follow him on LinkedIn and Twitter.

Alex Magana loves programming, music, adventure, writing, reading, architecture, and is a gastronome at heart. Currently, he works as a software engineer with BBC News and Andela. Previously, he worked as a software engineer with SuperFluid Labs and Insync Solutions. You can follow him on LinkedIn or GitHub.

Key Takeaways

Securing your source code with version control is effective only when you do it the right way. Understanding the best practices used in version control can make it easier for you to get the most out of Git and GitHub.
GitHub has an elaborate UI. Learning to navigate it, and installing the Octotree browser extension, will immensely help your development process.
GitHub is a powerful tool equipped with useful features. Exploring the Feature Branch Workflow, forking workflows, and features such as submodules and rebasing will enable you to make optimum use of GitHub.
The more elaborate the tools, the more time they can consume if you don't know your way around them. Master the commands for debugging and maintaining a repository to speed up your software development process.
Keep your code updated with the latest changes using continuous integration tools such as CircleCI or Travis CI, which integrate with GitHub.
The struggle isn't over until the code is successfully released to production. With GitHub's release management features, you can learn to complete hiccup-free software releases.

Full Interview

Why is Git important? What problem is it solving?

Git, like all other version control tools, exists to solve for one problem: change. This has been a recurring issue, especially when coordinating work on teams, both local and distributed, and it is specifically an advantage of Git through hubs such as GitHub, BitBucket, and GitLab. The tool was created by Linus Torvalds in 2005 to aid development and contribution on the Linux kernel. However, this doesn't necessarily limit Git to code; any product or project that has multiple contributors, or requires release management and versioning, stands to gain an improved workflow through Git. This also puts into perspective that there is no single standard; it's advisable to use what best suits your product(s).

What other similar solutions or tools are out there? Why is Git better?

As mentioned earlier, other tools do exist to aid in version control.
There are a lot of factors to consider when choosing a version control system for your organization, depending on product needs and workflows. Some organizations have in-house versioning tools because they suit their development. Others, for reasons such as privacy, security, or support, may look for integration between third-party and in-house tools. Git primarily exists to provide a fast, distributed version control system that is not tied to a central repository, hub, or project. It is highly scalable and portable. Other version control tools include Apache Subversion, Mercurial, and Concurrent Versions System (CVS).

How can Git help developers? Can you list some specific examples (real or imagined) of how it can solve a problem?

A simple way to define Git's indispensability is that it enables fast, persistent, and accessible storage. This means that changes to code throughout a product's life cycle can be viewed and updated on demand, each step supported by simple and compact commands. Developers can track changes from multiple contributors, trace introduced bugs back to their source, and revert where necessary. Git enables multiple workflows that align with practices such as Agile (for example, feature branch workflows), as well as forking workflows for distributed contribution, such as on open source projects.

What are some best tips for using Git and GitHub?

These are some of the best practices you should keep in mind while learning or using Git and GitHub:

Document everything
Utilize the README.md and wikis
Keep naming conventions simple and concise
Adopt naming prefixes
Tie each PR and branch to a ticket or task
Organize and track tasks using issues
Use atomic commits

[box type="shadow" align="" class="" width=""]Editor's note: To explore these tips further, read the authors' post '7 tips for using Git and GitHub the right way'.[/box]

What are the myths surrounding Git and GitHub?

Just as every solution or tool has its own positives and negatives, Git is also surrounded by myths one should be aware of. Some of these are:

Git is GitHub
Backups are equivalent to version control
Git is only suitable for teams
To use Git effectively, you need to learn every command

[box type="shadow" align="" class="" width=""]Editor's note: To explore these myths further, read the authors' post '4 myths about Git and GitHub you should know about'.[/box]

GitHub's new integration for Jira Software Cloud aims to provide teams a seamless project management experience
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub
GitHub introduces 'Experiments', a platform to share live demos of their research projects
Discussing SAP: Past, present and future with Rehan Zaidi, senior SAP ABAP consultant [Interview]

Savia Lobo
04 Oct 2018
11 min read
SAP, the market-leading enterprise software, recently became the first European technology company to create an AI ethics advisory panel, announcing seven guiding principles for AI development. These guidelines revolve around recognizing AI's significant impact on people and society. Also, last week at the Microsoft Ignite conference, SAP, in collaboration with Microsoft and Adobe, announced the Open Data Initiative. This initiative aims to help companies better govern their data and support privacy and security initiatives. For SAP, it will bring further advancements to its SAP C/4HANA and S/4HANA platforms. All of these actions emphasize SAP's focus on transforming itself into a responsible data-use company.

We recently interviewed Rehan Zaidi, a senior SAP ABAP consultant. Rehan became one of the youngest authors on SAP worldwide when he was published in the prestigious SAP Professional Journal in 2001. He has written a number of books, and over 20 articles and professional papers for the SAP Professional Journal USA and HR Expert USA, part of the prestigious sapexperts.com library. Following are some of his views on the SAP community and products, and how the SAP suite can benefit people including budding professionals, developers, and business professionals.

Key takeaways

SAP HANA was introduced to run jobs up to 200 times faster while maintaining efficiency.
The introduction of SAP Leonardo brought in the next wave of AI, machine learning, and blockchain services via the SAP Cloud Platform and other standalone projects.
Experienced ABAP developers should look forward to getting certified in one of the newest technologies, such as HANA and Fiori.
SAP ERP Central Component (SAP ECC) is the on-premises version of SAP, and it is usually implemented in medium and large-sized companies. For smaller companies, SAP offers its Business One ERP platform.
SAP Fiori is a line of SAP apps meant to address criticisms of SAP's user experience and UI complexity.

Q.1. SAP is one of the most widely used ERP tools. How has it evolved over the past few years from the traditional on-premise model to keep up with cloud-based trends?

Yes. Let me cover the main points.

SAP started as a company in 1973, and its first product, SAP R/98, was launched. In 1979, SAP launched the R/2 design. It had most of the typical processes, such as accounting, manufacturing, supply chain logistics, and human resources.
Then came R/3, which brought the more efficient three-tier architecture (presentation (GUI), application server, and database), with more new modules and functionality added. It was a smart system, fully configurable by functional consultants.
This was further enhanced with NetWeaver capability, which brought internet integration and SOA capability. SAP introduced the ECC 5 and subsequently the ECC 6 release.
Mobility was later added, letting mobile applications running on devices access and execute the business processes in SAP. Both displaying and updating SAP data became possible.
The HANA system was then introduced. It is very fast and efficient, allowing jobs to run up to 200 times faster than before.
Cloud systems then became available, letting customers connect to the SAP Cloud Platform via their on-premise systems and get access to services such as Mobile Service for app protection and Mobile Service for SAP Fiori, among others.
SAP Leonardo was finally introduced, as a way of bringing in next-gen AI, machine learning, and blockchain services via standalone projects and the SAP Cloud Platform.

Q.2. Being a Senior ABAP Programming Analyst, what does your typical day look like?

Ahh. Well, a typical day! No two days are the same for us. Each morning we find ourselves confronting a problem whose solution is to be devised. A different problem every day, followed by a unique solution. We spend hours and hours finding issues in custom-developed programs. We learn about making custom programs run faster. We get requirements from a wide variety of users. They may be in Human Resources, Materials Management, Sales and Distribution, Finance, and so on. A requirement may pertain to an entirely new report or a dialog program with a set of screens. We even build Fiori applications (using a JavaScript-based library) that are accessible from a PC or a mobile device. I am also asked to teach junior or trainee SAP developers a wide variety of technologies.

Q.3. Can you tell us about the learning curve for SAP? There are different job profiles related to SAP, which range from executives to consultants and managers. How does each of them learn or stay updated on SAP?

Yes, this is a very important question. A simple answer is that "there is no end to learning, and at any stage, learning is never enough," no matter which field within SAP you belong to. Things are constantly changing. The more you read and the more you work, the more you feel there is still a lot to be done. You need to constantly update yourself and learn about new technologies. There is plenty of material available on the internet. I usually refer to the official SAP website for newly available courses. They even tell you which backgrounds (managers, developers) the courses are relevant to. I also go to open.sap.com for new courses.

Whether they are consultants (functional and technical) or managers, all of them need to keep themselves up to date. They must take new courses and learn about innovation in their technology. For example, HR people must now study and learn about SuccessFactors. Even the integration of SAP HANA with other software is an interesting topic today. There are Fiori- and HANA-related courses for Basis consultants, and corresponding tracks for developers. Some knowledge of newer technologies is also important for managers and executives, since your decisions may need to be adapted based on the underlying technologies running in your systems. You should know the pros and cons of all technologies in order to make the correct move for your business.

Q.4. Many believe an SAP certification improves their chances of getting jobs at competitive salaries. How important are certifications? Which SAP certifications should a budding developer look to obtain?

When I did my certification in October 2000, I used to think that certifications were not important. But I have now realized that, yes, they make a difference. Certifications are definitely a plus point. They enhance your CV and give you an edge over those who are not certified. I have found some job adverts that specifically mention that certification is required or will be advantageous. However, certifications are only useful when you have at least 4 years of experience. For a fresh graduate, a certification might not be very useful.
A useful SAP consultant/developer combines a solid foundation of knowledge with a touch of experience. I suggest all my juniors go for certifications in order to strengthen their concepts. These include:

C_C4C30_1711 - SAP Certified Development Associate - SAP Hybris Cloud for Customer
C_CP_11 - SAP Certified Development Associate - SAP Cloud Platform
C_FIORDEV_20 - SAP Certified Development Associate - SAP Fiori Application Developer
C_HANADEV_13 - SAP Certified Development Associate - SAP HANA
C_SMPNHB_30 - SAP Certified Development Associate - SAP Mobile Platform Application Development (SMP 3.0)
C_TAW12_750 - SAP Certified Development Associate - ABAP with SAP NetWeaver 7.50
E_HANAAW_12 - SAP Certified Development Specialist - ABAP for SAP HANA

For experienced ABAP developers, I suggest getting certified in the newest technologies, such as HANA and Fiori. These may help you get a project quicker and/or at a better rate than others.

Q.5. The present buzz is around AI, machine learning, IoT, Big Data, and many other emerging technologies. SAP Leonardo works on making it easy to create frameworks for harnessing the latest tech. What are your thoughts on SAP Leonardo?

Leonardo is SAP's answer to the demand for an AI platform. It should be an important part of SAP's offerings, mostly built on the SAP Cloud Platform. SAP has relaunched Leonardo as a digital innovation system. As I understand it, Leonardo allows customers to take advantage of artificial intelligence (AI), machine learning, advanced analytics, and blockchain on their company's data. SAP gives customers an efficient way of using these technologies to solve business issues. It allows you to build a system which, in conjunction with machine learning, searches for results that can be combined with SAP transactions.

The benefit of SAP Leonardo is that all the company's data is available right in the SAP system. Using Leonardo, you have access to all human resources data and any other module data residing in the ERP system. Any company from any industry can make use of Leonardo; it works equally well for organizations in retail, food and beverage, medicine, manufacturing, and automotive. An approach that works for one company in a given industry can be applied to other companies in that industry. Suppose a company operates sensors. It can link the sensor data with the data in its SAP systems, and even link that with other data, and then use the Leonardo capabilities to solve problems or optimize performance. When a problem is solved for one company, a similar solution may be applied across its entire industry.

Yes, in my opinion, Leonardo has a bright future and should be successful. For more information about Leonardo success stories, I encourage readers to check out the SAP Leonardo Internet of Things Portfolio & Success Stories.

Q.6. You are currently writing a book on ABAP Objects and design patterns, expected to be published by the end of 2018. What was your motivation behind writing it? Can you tell us more about ABAP Objects? What should readers expect from this book?

ABAP and ABAP Objects have undergone tremendous changes over time, both in features and capability as well as in syntax. It is one of the most unsung topics of today. It has been around for quite a while, but most developers are not aware of it or are not comfortable enough to use it in their day-to-day work. ABAP is a vast community, with developers working in a variety of functional areas.
The concepts covered in the book will be generic, allowing the learner to apply them to his or her particular area. The book will cover ABAP Objects (the object-oriented extension of the SAP language ABAP) in the latest release of SAP NetWeaver 7.5 and explain the newest advancements. It will start with the programming of objects in general and the basics of the ABAP language that the developer needs to know to get started. The book will cover the most important topics needed in everyday support jobs and for succeeding in projects.

The book will be goal-directed, not a collection of theoretical topics. It won't just touch on the surface of ABAP Objects, but will go in depth, from building the basic foundation (e.g., classes and objects created locally and globally) to the intermediate areas (e.g., ALV programming, method chaining, polymorphism, simple and nested interfaces), and finally into the advanced topics (e.g., shared memory and persistent objects). Best practices for writing better programs with ABAP Objects will be shown at the end. No long stories, no boring theory: only pure technical concepts followed by simple examples, with code built around football players. Everything will be presented in a clear, interesting manner, and readers will learn tips and tricks they can apply immediately. Learners, students, new SAP programmers, and SAP developers with some experience can use this as an alternative to expensive training books. The book will also save readers time searching the internet for help writing new programs.

Knowing ABAP Objects is key for ABAP developers who want to move forward these days. Everything from simple ALV reporting requirements, to defining and catching exceptional situations that may occur in a program, to the BAdI enhancement technology that lets you enhance standard SAP applications, requires a sound understanding of ABAP Objects. In addition, Web Dynpro application development, the Business Object Processing Framework, and even OData service creation to expose data for use by Fiori apps all demand solid knowledge of ABAP Objects.

How to perform predictive forecasting in SAP Analytics Cloud
Popular data sources and models in SAP Analytics Cloud
Understanding Text Search and Hierarchies in SAP HANA
Site reliability engineering: Nat Welch on what it is and why we need it [Interview]

Richard Gall
26 Sep 2018
4 min read
At a time when software systems are growing in complexity, and when the expectations and demands from users have never been more critical, it's easy to forget that just making things work can be a huge challenge. That's where site reliability engineering (SRE) comes in, and it's one of the reasons we're starting to see it grow as a discipline and a job role.

The central philosophy behind site reliability engineering can be seen in trends like chaos engineering. As Gremlin CTO Matt Fornaciari said, speaking to us in June, "chaos engineering is simply part of the SRE toolkit." For site reliability engineers, software resilience isn't an optional extra; it's critical. In crude terms, downtime for a retail site means real monetary losses, but the principle extends beyond that. Because people and software systems are so interdependent, SRE is a useful way of thinking about how we build software more broadly.

To get to the heart of what site reliability engineering is, I spoke to Nat Welch, an SRE currently working at First Look Media, whose experience includes time at Google and Hillary Clinton's 2016 presidential campaign. Nat has just published a book with Packt called Real-World SRE. You can find it here. Follow Nat on Twitter: @icco

What is site reliability engineering?

Nat Welch: The idea [of site reliability engineering] is to write and modify software to improve the reliability of a website or system. As a term and field, it was founded by Google in the early 2000s and has slowly spread across the rest of the industry. It means having engineers dedicated to global system health and reliability, working with every layer of the business to improve the reliability of systems.

"By building teams of engineers focused exclusively on reliability, there can be someone arguing for and focusing on reliability in a way to improve the speed and efficiency of product teams."

Why do we need site reliability engineering?

Nat Welch: Customers get mad if your website is down. Engineers often have trouble weighing system reliability work against new feature work. Because of this, product feature work often takes priority, and reliability decisions are made by guesswork. By building teams of engineers focused exclusively on reliability, there can be someone arguing for and focusing on reliability in a way that improves the speed and efficiency of product teams.

Why do we need SRE now, in 2018?

Nat Welch: Part of it is that people are finally starting to build systems more like how Google has been building for years (heavy use of containers, lots of services, heavily distributed). The other part is a marketing effort by Google to make it easier for them to hire.

What are the core responsibilities of an SRE? How do they sit within a team?

Nat Welch: An SRE is just a specialization of a developer. They sit on equal footing with the rest of the developers on the team, because the system is everyone's responsibility. But while some engineers focus primarily on new features, SREs primarily focus on system reliability. This does not mean either side does not work on the other (SREs often write features, product devs often write code to make the system more reliable, and so on); it just means their primary focus when defining priorities is different.

What are the biggest challenges for site reliability engineers?

Nat Welch: Communication with everyone (product, finance, executive team, etc.), and focus. It's very easy to get lost in firefighting.

What are the 3 key skills you need to be a good SRE?
Nat Welch: Communication skills, software development skills, and system design skills. You need to be able to write code, review code, work with others, and break large projects into small pieces and distribute the work among people. But you also need to be able to take a system (working or broken) and figure out how it is designed and how it works.

Thanks Nat!

Site reliability engineering, then, is a response to a broader change in the types of software infrastructure we are building and using today. It's certainly a role that offers a lot of scope for ambitious and curious developers interested in a range of problems in software development, from UX to security. If you want to learn more, take a look at Nat's book.
“As Artists we should be constantly evolving our technical skills and thought processes to push the boundaries on what's achievable,” Marco Matic Ryan, Augmented Reality Artist

Sugandha Lahoti
18 Sep 2018
11 min read
Augmented and Virtual Reality are taking on the world, one AR app at a time. Almost all tech giants have their own AR platforms, from Google's ARCore to Apple's ARKit to Snapchat's Lens Studio. Not just that: there are now various galleries and exhibits focused solely on AR, where designers and artists showcase their artistic talents combined with rich augmented reality experiences. While searching for designers and artists working in the augmented reality space, we stumbled upon Marc-O-Matic, aka Marco Ryan. We were fascinated by his artwork and wanted to interview him right away.

https://vimeo.com/219045989

As his website bio says, "Marco is a multidisciplinary Artist, Animator, Director, Storyteller and Technologist working across Augmented and Virtual Reality technologies." He shared with us his experiences working with Augmented Reality and told us about his creation process, his current projects, tips and tricks for a budding artist venturing into the AR space, and his views on merging technology and art.

Key Takeaways

A well-rounded creative is someone who can do everything from creating the art and the associated stories to executing the technical aspects that bring the work to life. The future belongs to these creative types, who love to learn and experiment every day.
An artist must consider three aspects before taking on a project: first, how to tell their story in a more engaging and interesting manner; second, how to combine different skill sets; and third, the message their work conveys.
Augmented Reality has added a new level of depth to the art experience, not just for artists but also for viewers. Everyone has a smartphone these days, and with AR you can add many extra elements to an art piece. You can add sound, motion, and 3D elements to the experience, which engage more of your senses.
It is easy for a beginner artist to get started with creating Augmented Reality art. You need to start by learning a basic language (usually C# or JavaScript), and then you can learn more as you explore. Tools and platforms such as Adobe After Effects, Maya, and Blender are good for building shaders, materials, or effects.
Unity 3D is a popular choice for implementing the AR functionality, and over 91% of HoloLens experiences are made with Unity. It supports a large number of devices and also has a large community base. Unity offers a highly optimized rendering pipeline and rapid iteration capabilities for creating realistic AR/VR experiences.
AI and Machine Learning are at the forefront of innovation in AR/VR. Machine learning algorithms can be used in character modeling; they can map your facial movements directly to a 3D character's face. AI can make a character react emotionally based on the tone of the audio, and can also automate tasks, allowing an artist to focus solely on the creative aspects.

Full Interview

How did you get started in AR? How did your journey from being an artist to an AR artist begin?

I started experimenting with immersive AR/VR tech about 2-3 years ago. At the time, buzzwords like virtual and augmented reality started trending, and it made me curious to see how I could apply my existing skills in this area. Prior to getting into AR and VR, I came from a largely self-taught background in Art & Illustration, which then evolved into exploring film and animation, and finally led to game design and programming.
Exploring all these areas allowed me to become a well-rounded creative, meaning I could do everything from creating the art and the associated stories to executing the technical aspects of the work to make it come to life.

That's quite impressive and fascinating to know! What programming language did you learn first and why? What was your approach to learning the language? Do you also dabble in other languages? How can artists overcome the fear of learning technology?

I work with the game engine Unity 3D to create my VR and AR experiences, and the engine allows you to program in either C# or JavaScript. C# was the more prevalent language of the two, and for me it was easier to pick up. As a visual learner, I initially came to understand the language through node-based programming tools like Playmaker and uScript. These plug-ins for Unity are great for beginners, as they allow you to visually program behaviors and functionality into your experience by creating interconnected node trees or maps, which then generate the code. I'm familiar with this form of logic building, as other programs I use, such as Adobe After Effects, Maya, and Blender, use similar systems for building shaders, materials, or effects.

Through node-based programming, or visual scripting, you can take a look under the hood and understand the syntax iteratively. It's like understanding a language through reverse engineering. By seeing the code generated from the node maps you create, you can quickly understand how the language is structured.

I try to think of learning new things as adding skills to your arsenal; the more skills you possess, the further you can expand on your ideas. I don't think it's so much a fear of learning new technology. I think it's more a question of "Will learning this new technology be worth my time and benefit my current practice?" The beautiful age we live in makes it so much easier to learn new skills online, so there's plenty of support already available for anyone wanting to explore game design technologies in their creative practice.

What tools do you use in the process, and why those tools? What is the actual process you follow? What are the various steps?

The Augmented Reality art I create has two parts to it:

Creating the Art

Everything is drawn on paper first, using your typical office ballpoint pens, ink wash, and sometimes watercolours or gouache. I prefer creating my work that way. To me, keeping that 'hand-crafted' style and aesthetic has its charm when bringing the work to life through Augmented Reality. I think it's really important to demonstrate the importance of traditional mediums, techniques, and disciplines when working with immersive technologies, as they're still very valid in creating engaging content. I see a lot of AR experiences that lack charm and feel somewhat 'gamey', which is why I want to continue integrating more raw-looking aesthetics in this area.

Augmented Representation & Implementation

Once the art is planned and created, I scan it and start splicing it up digitally. From the preserved textures and linework, I then create a 3D representation of the artwork using 3D modeling and animation software called Blender. It's one of many 3D authoring tools out there. It not only packs a significant number of features; it's also free, which makes it ideal for beginners and students. Once the 3D representation of the work is created, it's imported into a game engine called Unity 3D, where I implement the AR functionality.
Unity 3D is a widely used game engine. It's one of many engines out there, but what makes it great is its support for deploying to all manner of devices. It also has a large community behind it, should you need help.

How long does a typical project take? Do you work by yourself or with a team? How long do you work?

On average, an augmented artwork may take anywhere from 3 to 4 weeks to create, which includes the art, animation, modeling, and AR implementation. When I first started out it would take much longer, but over time I've streamlined my own processes to get things done faster. My alias Marc-O-Matic tends to be mistaken for a team, but I'm actually just one person. I much prefer being able to create and direct everything myself.

What is your advice to a budding artist who is interested in learning tech to produce art?

From my experience: don't confine yourself to one specific medium, and try practising a different skill or technique every day. As artists, we should be constantly evolving not just our technical skills but also our thought processes to push the boundaries of what's achievable. "How can I tell my story in a more engaging and interesting manner?" "What can I create if I combine these different skill sets?" "How do I bring an end to systematic racism and white privilege?" and so on. The more knowledge you have, the further you can push the boundaries of your creative expression.

How do you support yourself financially? Do you also sell your art pieces or make art for clients? Where do you sell your work and meet people?

I work under my own artist entity, 'Marc-O-Matic.' I much prefer working independently to working within larger studios or agencies. I juggle a balance between commercial work and personal projects. There's generally a high demand for Augmented/Virtual Reality experiences, and I'm really lucky and grateful that clients want content in my particular style. I work with a variety of organisations, generally consulting, providing creative direction, and in most cases building the experiences too.

Aside from the commercial work, I'm currently touring my own Augmented Reality art and storytelling collection called 'Moving Marvels', where audiences get to see my illustrated works come to life right in front of them. It's also how I sell various limited edition prints of my augmented artworks. The collection is exhibited at tech and innovation conferences, symposiums, galleries, and even universities and educational institutes. It's a great way to make connections and demonstrate what immersive technologies can do in a creative capacity.

No interview would be complete without a discussion on AI and machine learning. As a technophile artist, are you excited about merging AI and art in new ways? What are some of your hopes and fears about this new tech? Do you plan to try machine learning anytime soon?

It's like a double-edged sword with +5 to Creativity. Technology can enhance our creative abilities and processes in a number of ways. At the same time, it can also make some of us lazy, because we can become so reliant on it. Like any new technology that aims to assist in the creative process, there's always the fear that technology will make creatives lazy or will replace professions altogether. In some ways technology has done this, but having said that, it has also created new job opportunities and abilities for artists.
In areas like character animation, for example, the assistance of machine learning algorithms means creatives can worry less about the laborious physical processes of rigging and complex animation and focus more on the storytelling. For example, we can significantly reduce production times in facial rigging through facial recognition. By learning the behaviour and structure of your own face, machine learning algorithms can map your facial movements directly onto a 3D character's face. By simply recording your own facial movements and gestures, you've created an entire impression map for your 3D character to use.

What's also crazy is that, on top of that, by running a voice-over track on that 3D character, you can train it to play the track out in sync with the voice, as well as have the entire face react emotionally based on the tone of the audio. It's game-changing but also terrifying stuff, as this sort of technology can be used to create highly realistic fake impressions of real people. As an animator myself, I've started experimenting with this technology for my own VR animations. What would once take me hours or even days, animating a talking 3D character, can now take mere minutes.

Author Bio

Marc-O-Matic is the moniker of Marco Matic Ryan. He is a multidisciplinary Artist, Animator, Director, Storyteller and Technologist working across Augmented and Virtual Reality technologies. He is based in Victoria, Australia.

Unity plugins for augmented reality application development
Google ARCore is pushing immersive computing forward
There's another player in the advertising game: augmented reality
“Deep meta reinforcement learning will be the future of AI where we will be so close to achieving artificial general intelligence (AGI)”, Sudharsan Ravichandiran

Sunith Shetty
13 Sep 2018
9 min read
A McKinsey report predicts that artificial intelligence techniques, including deep learning and reinforcement learning, have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. Reinforcement learning (RL) is an increasingly popular technique for enterprises that deal with large, complex problem spaces. It enables agents to learn from their own actions and experiences: working in an interactive environment, they use trial and error to find the best optimized result. Reinforcement learning is at the cutting edge right now, and it has finally reached a point where it can be applied to real-world industrial systems.

We recently interviewed Sudharsan Ravichandiran, a data scientist at param.ai and the author of the book Hands-On Reinforcement Learning with Python. Sudharsan takes us on an insightful journey, explaining why reinforcement learning is trending and becoming so popular lately. He talks about the positive contributions of RL in various fields such as the gaming industry, robotics, inventory management, manufacturing, and finance.

Author's Bio

Sudharsan Ravichandiran, author of the book Hands-On Reinforcement Learning with Python, is a data scientist, researcher, and YouTuber. His research focuses on practical implementations of deep learning and reinforcement learning, including natural language processing and computer vision. He used to be a freelance web developer and designer and has designed award-winning websites. He is an open source contributor and loves answering questions on Stack Overflow. You can follow his open source contributions on GitHub.

Key Takeaways

Reinforcement learning adoption in the community has increased exponentially because of the augmentation of reinforcement learning with state-of-the-art deep learning algorithms. It is extensively used in the gaming industry, robotics, inventory management, and finance, and more and more research papers and applications are leading to full-fledged self-learning agents.
One of the common challenges faced in RL is safe exploration. To avoid this problem, one can use imitation learning (learning from human demonstration) to guide the agent toward a well-optimized solution.
Deep meta reinforcement learning will be the future of artificial intelligence, bringing us close to artificial general intelligence (AGI): a single model capable of mastering a wide variety of complex tasks.
Sudharsan suggests that readers learn to code the algorithms from scratch instead of using libraries. It will help them understand and implement complex concepts in their research work or projects far better.

Full Interview

Reinforcement learning is at the cutting edge right now, with many of the world's best researchers working on improving the core algorithms. What do you think is the reason behind RL's success, and why is RL getting so popular lately?

Reinforcement learning has been around for many years; the reason it is so popular right now is that it has become possible to augment reinforcement learning with state-of-the-art deep learning algorithms. With deep reinforcement learning, researchers have obtained better results. Specifically, reinforcement learning started to grow on a massive scale after the reinforcement learning agent AlphaGo defeated the world champion at the board game Go.
Also, deep reinforcement learning algorithms take us a step closer to artificial general intelligence, which is the true AI.

Reinforcement learning is a pretty complex topic to wrap your head around. What got you into the RL field? What keeps you motivated to keep working on these complex research problems?

I used to be a freelance web developer during my university days. I had a course called Artificial Intelligence in my spring semester; it really intrigued me and made me want to explore more of the field. Later on, I was invited to a Microsoft data science conference, where I met many experts and got to understand the field much better. All of this intrigued me and made me venture into AI.

The one thing that motivates and excites me is the pace of advancement happening in the field of reinforcement learning lately. DeepMind and OpenAI are doing a great job and contributing massively to the RL community. Recent advancements, like human-like robot hand control that manipulates physical objects with unprecedented dexterity, imagination-augmented agents that can imagine and then make decisions, and world models in which agents have the ability to dream, excite me and keep me going.

Can you please list three popular problem areas where RL is majorly used? Also, what are the most pressing challenges faced while implementing RL in real life? As a developer/researcher, how are you gearing up to solve them?

RL is predominantly used in the gaming industry, robotics, and inventory management.

There are several challenges in reinforcement learning; for instance, safe exploration. Reinforcement learning is basically a trial-and-error process, where agents try several actions to find the best and optimal one. Consider an agent learning to navigate, or learning to drive a car: agents don't know which action is better unless they try it. The agent also has to be careful not to select actions which are harmful to others or to itself, for example, colliding with other vehicles. To avoid this problem, we can use imitation learning, or learning from a human demonstration, where the agents learn directly from a human supervisor. Apart from these, there are various evolutionary strategies used to solve the challenges faced in RL.

There have been positive developments in RL from the OpenAI and DeepMind teams that have been widely adopted both in research and in real-world applications. What are some cutting-edge techniques you foresee getting public attention in RL in 2018 and the near future?

Great things are happening in RL research each and every day. Deep meta reinforcement learning will be the future of AI, where we will be so close to achieving artificial general intelligence (AGI). Instead of creating different models to perform different tasks, with AGI a single model can master a wide variety of tasks and mimic human intelligence.

Gaming and robotics or simulations are the two popular domains where reinforcement learning is extensively used. In what other domains does RL find important use cases, and how?

Manufacturing

In manufacturing, intelligent robots are used to place objects in the right position. Whether a robot fails or succeeds in placing an object correctly, it remembers the action and trains itself to perform the task with greater accuracy. The use of intelligent agents will reduce labor costs and result in better performance.

Inventory management

RL is extensively used in inventory management, which is a crucial business activity.
Some of these activities include supply chain management, demand forecasting, and handling several warehouse operations (such as placing products in warehouses to manage space efficiently).

Infrastructure management

RL is also used in infrastructure management. For instance, Google researchers at DeepMind have developed RL algorithms for efficiently reducing the energy consumption in their own data centers.

Finance

RL is widely used in financial portfolio management, which is the process of constantly redistributing a fund into different financial products, and in predicting and trading in commercial transaction markets. JP Morgan has successfully used RL to provide better trade execution results for large orders.

Your recently published Hands-On Reinforcement Learning with Python has received a very positive response from readers. What are some key challenges in learning reinforcement learning, and how does your book address them?

One of the key challenges in learning reinforcement learning is the lack of intuitive examples and a poor understanding of RL fundamentals along with the required math. The book addresses these challenges by explaining all the reinforcement learning concepts from scratch and gradually taking readers to advanced concepts, exploring them one at a time. The book also explains all the required math step by step, intuitively, with plenty of examples. My intention behind adding multiple examples and code to each chapter was to help readers understand the concepts better, and also to help them understand when to apply a particular algorithm. The book also works as a perfect reference for beginners who are new to reinforcement learning.

Are there any prerequisites needed to get the most out of the book? What should readers keep in mind while developing their own self-learning agents?

Readers who are familiar with machine learning and Python basics can easily follow the book. The book starts by explaining reinforcement learning fundamentals and algorithms with applications, then takes the reader through deep learning algorithms, and finally explains advanced deep reinforcement learning algorithms. While creating self-learning agents, one should be careful in designing reward and goal functions.

What, in your opinion, are the 3-5 major takeaways from your book?

The book serves as a solid go-to place for someone who wants to venture into deep reinforcement learning. It is completely beginner friendly and takes readers to the advanced concepts gradually. By the end of the book, readers can master reinforcement learning, deep learning, and deep reinforcement learning, along with their applications in TensorFlow and all the required math.

Would you like to add anything more for our readers?

I would suggest that readers code the algorithms from scratch instead of using libraries; it will help them understand the concepts far better. I also would like to thank each and every reader for making this book a huge success. My best wishes to them for their reinforcement learning projects.
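In the spirit of that from-scratch advice, here is a minimal tabular Q-learning sketch on a toy corridor environment. The environment, reward scheme, and hyperparameters are all invented for illustration (this is not an example from the book); it also demonstrates the epsilon-greedy trial-and-error exploration discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corridor: states 0..4; action 0 = left, action 1 = right.
# The agent starts at state 0 and earns reward 1 on reaching state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    """Invented environment dynamics for the corridor."""
    next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for episode in range(500):
    state, done = 0, False
    for _ in range(100):  # cap episode length
        if done:
            break
        # Epsilon-greedy trial and error: explore occasionally,
        # otherwise exploit (ties broken at random).
        if rng.random() < epsilon:
            action = rng.integers(n_actions)
        else:
            best = np.flatnonzero(Q[state] == Q[state].max())
            action = rng.choice(best)
        next_state, reward, done = step(state, action)
        # The Q-learning update rule
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q)  # after training, action 1 (right) should score higher in every state
```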
If you found this interview interesting, make sure you check out other insightful articles on reinforcement learning:

Top 5 tools for reinforcement learning
This self-driving car can drive in its imagination using deep reinforcement learning
Dopamine: A Tensorflow-based framework for flexible and reproducible Reinforcement Learning research by Google
OpenAI builds reinforcement learning based system giving robots human like dexterity
DeepCube: A new deep reinforcement learning approach solves the Rubik's cube with no human help
What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]

Neil Aitken
18 Aug 2018
11 min read
Netflix, the global poster child for streamed TV and for using Big Data to inform the programs it develops, has shown steady customer growth for several years now. Recently, the company revealed that it would be shutting down the user reviews which have been so prominent in its media catalogue interface for so long.

In the background, media and telco are merging. AT&T, the telco behind one of the biggest deals in recent history, acquired Time Warner and wants HBO to become like Netflix. Telia, a Swedish telecommunications company, agreed to buy Bonnier Broadcasting in late July 2018. The video content landscape has changed a great deal in the last decade. Everyone in the entertainment game wants to move beyond broadcast TV and to use data to develop content their users will love and give their customer base more variety. This means they can look to data to charge higher subscription rates per user, experiment with tiered subscriptions, localize global content, globalize local content, and more.

These changes raise two key questions. First, are we heading for a world in which AI and ML based algorithms drive what we watch on TV? And second, is human recommendation being quietly replaced by machine recommendation, over which the user has no control?

Chart: Netflix is acquiring customers fast. Source: Statista

To get an insider's view on those questions, I sat down with Matt Jones of OVO Mobile, one of Australia's fastest growing telecommunications companies. OVO offers its customers a unique point of difference: streaming video sports content included in a phone plan. OVO has bought the rights to a number of niche sports in Australia which weren't previously available, and now offers free OTA (over the air) digital content for fans of 'unusual' sports like drag racing or gymnastics. OTA content is anything delivered to a user's phone over a wireless network. In OVO's case, the data used to transport the video content is free. That means customers don't have to worry about paying more for mobile data so they can watch it, a key concern for users.

OVO Mobile and Netflix are in very similar businesses, and Matt has a unique point of view about how Artificial Intelligence and Machine Learning will impact the worlds of telco and media.

Key takeaways

What has changed our media consumption habits: the ubiquitous mobile internet, an always-on and connected younger generation, better mobile hardware, improved network performance and capabilities, and the need for control over content choices.
Digitization enables new features that people have proven to love: binge watching, screening out ad breaks, and time shifting.
The key to understanding the value of ML and AI is not in understanding the statistical or technical models that enable it; it's the way AI is used to improve the customer experience your digital customers are having with you.
AI is used in digital and app experiences to personalize what users see, in ways old media never could.
Content producers use the information they have on us, about the programs we watch, when we watch them, and for how long, to personalize and improve what they offer us.
The ways AI and ML can contribute to the delivery of online media are endless: personalisation, context awareness, notification management, and more.
Social acceptance of media delivered to users on mobile phones is what's driving change

A number of overlapping factors are driving changes in how we engage with content. Social acceptance of the internet, and of mobile access to it, as a core part of life is one key enabler. From a technology perspective, things have changed too. Smartphones now have bigger, higher resolution screens than ever before, and they're with us all the time.

Jones believes this change is part of a cultural evolution in how we relate to technology. He says, "There has also been a generational shift which has taken place. Younger people are used to the small screen being the primary device. They're all about control, seeking out their interests and consuming these, as opposed to previous generations which were used to mass content distribution from traditional channels like TV."

Other factors include network performance and capability, which have improved dramatically in recent years. Data speeds have grown exponentially, from 3G networks (launched less than 15 years ago), which could support only stuttering, low-resolution video, to 4G and 4.5G networks, which can now support live streaming of high definition TV. Mobile data allowances in plans, and offers from some phone companies to provide some content 'data free' (as OVO does), have also driven uptake.

Finally, people want convenience, and digital delivers it in ways people have never experienced before. Digitization enables new features that people have proven to love: binge watching, screening out ad breaks, and time shifting.

What part can AI and machine learning play in the delivery of media online?

Artificial Intelligence (AI) is already part of 85% of our online interactions. Gartner suggests it will be part of every product in the future. The key to understanding the value of ML and AI is not in understanding the statistical or technical models that enable it; it's the way AI is used to improve the customer experience your digital customers are having with you. When you find a new band on Spotify, when YouTube recommends a funny video you'll like, when Amazon shows you other products to consider alongside the one you just put in your basket, that's AI working to improve your experience.

"Over The Top content is exploding. Content owners are going direct to consumer and providing fantastic experiences for their users. What's changing is the use of AI in digital and app experiences to personalize what users see in ways old media never could," says Matt.

Matt's video content recommendation app, for example, 'learns' not just what you like to watch but also the times you are most likely to watch it. It then prompts users with a short video to entice them to watch, and the analytics show just how effective this is: the app can be up to 5 times more successful at encouraging customers to watch content than prompts that don't use it.

"The list of ways that AI and ML contribute to the delivery of media online is endless. Personalisation, context awareness, notification management... endless."

By offering users recommendations on content they'll love, producers can now engage more customers for longer.
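To make the mechanics concrete, here is a deliberately simplified sketch, not OVO's or Netflix's actual system, of one common form such recommendations take: a user is represented by how long they have watched each genre, and unseen titles are ranked by cosine similarity to that viewing history. All titles, genres, and numbers are invented:

```python
import numpy as np

# Invented data: minutes a user has watched in each genre
genres = ["drag racing", "gymnastics", "superbikes", "drama"]
user_history = np.array([120.0, 5.0, 90.0, 0.0])

# Candidate titles described along the same genre dimensions
catalogue = {
    "V8 Finals Highlights": np.array([1.0, 0.0, 0.3, 0.0]),
    "Beam Routine Special": np.array([0.0, 1.0, 0.0, 0.1]),
    "Island Romance":       np.array([0.0, 0.0, 0.0, 1.0]),
}

def cosine(a, b):
    """Similarity between a title and the user's viewing profile."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank unseen titles by similarity to the user's history
ranked = sorted(catalogue.items(),
                key=lambda kv: cosine(user_history, kv[1]),
                reverse=True)
for title, _ in ranked:
    print(title)  # motorsport content should rank first for this user
```

Real systems layer context (time of day, device) and learned models on top of this kind of scoring, but the core idea, turning watch history into a ranking signal, is the same.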
Content producers use the information they have on us – the programs we watch, when we watch them and for how long – to:

- Personalise at volume: Apps used to deliver content can personalise what is shown first to users, based on a number of variables known about them, including the sort of context awareness that is relatively easy to obtain on mobile devices. Ultimately, every AI customer experience improvement (including the examples that follow) is designed to automate the process of providing something special that each individual uniquely wants. Automation means this can be done at scale, with every customer treated uniquely.
- Notification management: AI that tracks the success of notifications – and acknowledges, critically, when they are not helpful to the user – can be employed to alert users only about things they want to know. These AI solutions provide updates based on user preferences and avoid serving irrelevant information.
- Content discovery and re-engagement: AI and ML can be used to provide recommendations which expose customers to content they would not otherwise find, but which they are likely to value (a minimal recommendation sketch follows after this list).
- Better, more relevant advertising: Advertising which targets a legitimately held, real customer need is actually useful to viewers. Better analytics for AI can assist in targeting micro-segments with ads containing information customers will value. Lattice, a business insights tool provider, offers a 'Lattice Engine' product which combines information held in multiple cloud-based locations and uses AI to automatically assign customers to a segment that suits them. Those data are then fed to a customer's eCommerce site and other channel interactions, and used to offer content which will help them convert better.
- Developing better segments: Raw data on real customers can be gathered from digital content systems to inform above-the-line marketing in the real, non-digital world. Big data analytics can now be used with accurate segmentation for local area marketing and to tie together digital and retail customer experiences (the second sketch after this list shows one common approach). McKinsey suggests that 36% of companies are actively pursuing strategies driven from their Big Data reserves, and advises its clients that Big Data can be used to better understand and grow customer lifetime values.
- In the future, deep linking for calls-to-action: Some digital content is provided in a form that lets viewers find out more about an item on screen. Providing a way to deep link from a video screen into a shopping cart prepopulated with something just seen on screen is an exciting possibility for the future. Cutting steps out of the buying process, so that eCommerce users can transact from within content apps and buy a product they've seen on screen, is likely to become a big business. Deep linking raises the value of the content shown to the degree it raises sales of the products included.
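For the content discovery point above, the classic baseline technique is item-to-item collaborative filtering: score programmes a user hasn't seen by how often they are co-watched with the ones they have. The sketch below shows the idea with a made-up watch matrix; the programme names echo OVO's sports, but the data and scoring choices are invented for illustration, not taken from any real system.

```python
import numpy as np

# Hypothetical user-by-programme watch matrix (1 = watched). Rows are users,
# columns are programmes; every programme is assumed to have at least one viewer.
programmes = ["Supercars", "Superbikes", "Drag Racing", "Gymnastics"]
watched = np.array([
    [1, 1, 1, 0],   # motorsport fan
    [1, 0, 1, 0],   # motorsport fan
    [0, 0, 0, 1],   # gymnastics fan
    [1, 1, 0, 0],   # motorsport fan
])

# Cosine similarity between programme columns, built from co-viewing counts.
norms = np.linalg.norm(watched, axis=0)
similarity = (watched.T @ watched) / np.outer(norms, norms)

def recommend(user_row, k=2):
    """Score unseen programmes by similarity to what the user already watched."""
    scores = similarity @ user_row
    scores[user_row == 1] = -np.inf   # never re-recommend watched programmes
    top = np.argsort(scores)[::-1][:k]
    return [programmes[i] for i in top if scores[i] > 0]

print(recommend(watched[1]))  # a user who watched Supercars + Drag Racing
# ['Superbikes']
```

Real recommendation engines layer context awareness (time of day, device), recency and editorial rules on top of a similarity core like this one.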
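And for the segmentation point, a common first pass is to cluster users on a few simple viewing features and hand the resulting groups to marketing. This sketch uses scikit-learn's k-means on invented per-user features; the feature choices and the use of three clusters are assumptions made for the example, not a description of Lattice's or anyone else's product.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user features: [viewing hours per week, share that is
# live sport, share watched on mobile]. Values invented for illustration.
features = np.array([
    [12.0, 0.9, 0.8],
    [10.5, 0.8, 0.9],
    [2.0,  0.1, 0.3],
    [1.5,  0.2, 0.2],
    [6.0,  0.5, 0.5],
])

# k=3 is an arbitrary starting point; in practice you would tune it.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(features)
for user, segment in enumerate(kmeans.labels_):
    print(f"user {user} -> segment {segment}")
```

Each cluster can then be profiled – heavy live-sport mobile viewers, light on-demand viewers, and so on – and matched to different offers or above-the-line campaigns.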
Bringing it all together

Jones believes that those who invest big in AI and machine learning – and, of those, the ones who find a way to draw out insights and act upon them – will be the ultimate victors.

"The big winners are going to be the people who connect a fan with content they love and use AI and ML to deliver the best possible experience. It's about using all the information you have about your users and acting on it," said Jones.

That commercial incentive is already driving behavior. AI and ML already drive personalized content recommendations, and progressive content companies, including Matt's, are working on building AI into every facet of every digital experience you have.

As to whether AI is entirely replacing social media influence, I don't think that's the case. The research says people are still four times more likely to watch a video if it is recommended to them by a friend. Reviews have always been important to presales on the internet, and that applies to TV shows too. People want to know what real users felt when they used a product. If they can't get reviews from Netflix, they will simply open a new tab and google for reviews while they are deciding what to watch.

About Matt Jones

Matt is an industry disruptor. Launching the first-of-its-kind media and telco brand OVO Mobile in 2015, Matt is the driving force behind the convergence of new media and telco – bringing together telecommunications with media rights and digital broadcast for mass distribution. OVO is a new type of telco, delivering content that fans are passionate about, streamed live on their mobile or tablet, unlimited and data free.

OVO has secured exclusive 3-year-plus digital broadcast and distribution rights for a range of content owners including Supercars, World Superbikes, 400 Thunder Drag Series, Audi Australia Racing and Gymnastics Australia – with a combined Australian audience estimated at over 7 million.

OVO is a multi-award winner, including the Money Magazine Best of the Best Award 2017 for high usage, and has featured on A Current Affair, Sunrise, The Today Show, Channel 7 News, Channel 9 News and multiple radio shows for its world-first kids' mobile phone plan with built-in cyber security protection.

As OVO CEO, Matt was nominated for Start-Up Executive of the Year at the CEO Magazine Awards 2017 and was awarded runner-up. The award recognises the achievements of leaders and professionals, and the contributions they have made to their companies across industry-specific categories.

Matt holds a Bachelor of Arts (BA) from the University of Tasmania and regularly speaks at telco, sports marketing and media forums and events. Matt has held executive leadership roles at leading telecommunications brands including Telstra (Head of Strategy – Operations), Optus, Vodafone, AAPT and Telecom New Zealand, as well as global management consulting firms including BearingPoint. Matt lives on the northern beaches of Sydney with his wife Mel and daughters Charlotte and Lucy.