Author Posts

Listen: How ActiveState is tackling "dependency hell" by providing enterprise-level support for open source programming languages [Podcast]

Richard Gall
08 Oct 2019
2 min read
"Open source back in the late nineties - and even throughout the 2000s - was really hard to use," ActiveState CEO Bart Copeland says. "Our job," he continues, "was to make it much easier for developers to use open source and much easier for enterprises to use open source." How does ActiveState work? But how does ActiveState actually do this? Copeland explains: "ActiveState is exactly like Red Hat. So what Red Hat did to Linux - providing enterprise-grade Linux distributions - ActiveState does for open source programming languages." Clearly ActiveState is an interesting product that's playing an important part in helping enterprises to better manage the widespread migration to open source technology. For the latest edition of the Packt Podcast we spoke to Copeland about ActiveState and the growth of open source over the last decade. We think you'll find what he has to say interesting... Listen: https://soundcloud.com/packt-podcasts/activestate-making-open-source-more-accessible-for-the-enterprise-interview-with-bart-copeland   Read next: Can a modified MIT ‘Hippocratic License’ to restrict misuse of open source software prompt a wave of ethical innovation in tech? Key quotes from Bart Copeland Copeland on the relationship between enterprise management and developers: "If you look at the enterprise… they want to make sure that it works and it doesn’t cause security threats and their in compliance with all the licenses. And the result is, due to the complexities of open source, management within the enterprise will often limit developers on what languages and what open source stacks they can use because the more stacks you have, the more complexity you have in an organization." Copeland on developer freedom: "A developer is a very technical and creative individual and they want to be able to use the right tools to build the right solution. And so if a developer is handcuffed to certain technology stacks, they may not be able to use the best technology to solve the problem." Learn more about ActiveState here.

DevSecOps and the shift left in security: how Semmle is supporting software developers [Podcast]

Richard Gall
11 Nov 2019
2 min read
Software security has been 'shifting left' in recent years. Thanks to movements like Agile and Dev(Sec)Ops, software developers are finding that they have to take more responsibility for the security of their code. By moving performance and security testing earlier in the development lifecycle, it's much easier to identify and catch defects and issues.

The reasons for this are largely rooted in the utter dominance of open source software and the increasingly distributed nature of the systems we're building. To put it bluntly: if our software is open and loosely connected, the opportunity for systems to be exploited by malicious actors grows vastly.

To tackle this, we're starting to see a wealth of platforms and tools emerge that aim to help developers embrace security as a fundamental part of the development process. One such platform is Semmle, a code analysis platform designed to help developers and engineers identify issues quickly.

To find out more about Semmle - and the wider DevSecOps movement - we spoke to Chief Security Officer Fermin Serna in an edition of the Packt Podcast. He explained how Semmle works, what it's trying to achieve, and placed it in the broader context of this 'shift left' that's quickly becoming a new reality for many engineers.

Listen to the episode: https://soundcloud.com/packt-podcasts/we-need-to-democratize-security-how-semmle-is-improving-open-source-security

To learn more about Semmle, visit its website here. You can also follow Fermin Serna on Twitter: @fjserna.
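Semmle's own analysis is built on its QL query language over a full database of the program. Purely as a sketch of the kind of check that "shifts left" into a developer's workflow - flagging risky-looking calls before review rather than after deployment - here is a deliberately naive scanner; the patterns are illustrative, not Semmle rules:

```java
// A deliberately naive "shift-left" check: scan Java sources for risky-looking
// calls before code review. Real analyzers such as Semmle build a queryable
// database of the whole program; this regex pass only gestures at the idea.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Pattern;

public class NaiveSecurityScan {
    // Patterns that often deserve a second look (illustrative, not Semmle rules).
    private static final List<Pattern> RISKY = List.of(
            Pattern.compile("Runtime\\.getRuntime\\(\\)\\.exec"),        // command execution
            Pattern.compile("new ObjectInputStream"),                    // unsafe deserialization
            Pattern.compile("MessageDigest\\.getInstance\\(\"MD5\"\\)")); // weak hash

    public static void main(String[] args) throws IOException {
        Path root = Path.of(args.length > 0 ? args[0] : ".");
        try (var files = Files.walk(root)) {
            files.filter(p -> p.toString().endsWith(".java")).forEach(p -> {
                try {
                    List<String> lines = Files.readAllLines(p);
                    for (int i = 0; i < lines.size(); i++) {
                        for (Pattern pattern : RISKY) {
                            if (pattern.matcher(lines.get(i)).find()) {
                                System.out.printf("%s:%d flagged by %s%n",
                                        p, i + 1, pattern.pattern());
                            }
                        }
                    }
                } catch (IOException e) {
                    // Skip unreadable files rather than abort the whole scan.
                }
            });
        }
    }
}
```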
Read next:
5 reasons poor communication can sink DevSecOps
How Chaos Engineering can help predict and prevent cyber-attacks preemptively

Listen: Walmart Labs Director of Engineering Vilas Veeraraghavan talks to us about building for resiliency at one of the biggest retailers on the planet [Podcast]

Richard Gall
04 Jun 2019
2 min read
As software systems become more distributed, reliability and resiliency have become more and more important. This is one of the reasons we've seen the emergence of chaos engineering: unreliable systems cause downtime, and downtime costs money.

The impact of downtime is particularly significant for huge organizations that depend on the resilience and reliability of their platforms and applications. Take Uber: not only does the simplicity of the user experience hide its astonishing complexity, but it also has to manage that complexity in a way that's reliable. A ride-hailing app couldn't be anywhere near as successful as Uber if it suffered even 1% downtime.

Building resilient software is difficult

But actually building resilient systems is difficult. We recently saw how Uber uses distributed tracing to build more observable systems - which can help improve reliability and resiliency - in our podcast episode with Yuri Shkuro. In this week's podcast we're diving even deeper into resiliency with Vilas Veeraraghavan, Director of Engineering at Walmart Labs. Vilas has experience at Netflix, the company where chaos engineering originated, but at Walmart he's been playing a central role in bringing a more evolved version of chaos engineering - which Vilas calls resiliency engineering - to the organization.

In this episode we discuss:
Whether chaos engineering and resiliency engineering are for everyone
Cultural challenges
How to get buy-in
Getting tooling right

https://soundcloud.com/packt-podcasts/walmart-labs-director-of-engineering-vilas-veeraraghavan-on-chaos-engineering-resiliency

"You do not want to get up in the middle of the night, get on the call with the VP of engineering, and blurt out, 'I have no idea what happened.' Your answer should be: 'I know exactly what happened, because we have tested this exact scenario multiple times. We developed a recipe for it, and here is what we can do.' That gives you, as an engineer, the power to be able to stand up and say: I know exactly what's going on, I'll fix it, don't worry, we're not going to cause an outage."
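The rehearsal Veeraraghavan describes - having "tested this exact scenario multiple times" - is what fault injection automates. A minimal sketch of the idea, with an invented failure rate and responses (this is not Walmart Labs' tooling):

```java
// Minimal chaos-style fault injection: wrap a dependency call, fail it at a
// configured rate, and verify the fallback path actually works. A sketch of
// the idea only; the service, rate, and responses are made up.
import java.util.Random;
import java.util.function.Supplier;

public class FaultInjectionDemo {
    private static final Random RANDOM = new Random();

    // Calls the primary supplier, injecting failures with the given probability;
    // falls back to a degraded-but-safe response when the call "fails".
    static String callWithChaos(Supplier<String> primary, double failureRate,
                                Supplier<String> fallback) {
        if (RANDOM.nextDouble() < failureRate) {
            System.out.println("injected failure: exercising fallback path");
            return fallback.get();
        }
        return primary.get();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            String result = callWithChaos(
                    () -> "live inventory count: 42",    // pretend remote service
                    0.5,                                  // 50% injected failure rate
                    () -> "cached inventory count: 40"); // degraded response
            System.out.println(result);
        }
    }
}
```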

Francesco Marchioni on Quarkus 1.0 and how Red Hat increases the efficiency of Cloud-Native applications [Interview]

Vincy Davis
19 Dec 2019
11 min read
Cloud-native applications are an assembly of independent services used to build new applications, optimize existing ones, and connect them in such a way that the applications can skillfully deliver the desired result. More specifically, they are employed to build scalable and fault-tolerant applications in public, private, or hybrid clouds.

Quarkus, a new Kubernetes-native framework first launched in March this year, released its first stable version, Quarkus 1.0, last month. Quarkus allows Java developers to combine the power of containers, microservices, and cloud-native to build reliable applications. To get a clearer understanding of cloud-native applications with Java and Quarkus, we interviewed Francesco Marchioni, a Red Hat Certified JBoss Administrator (RHCJA) and Sun Certified Enterprise Architect (SCEA) working at Red Hat. Francesco is the author of the book 'Hands-On Cloud-Native Applications with Java and Quarkus'.

Francesco on Quarkus 1.0 and how Quarkus is bringing Java into the modern microservices and serverless modes of development

Quarkus is coming up with its first stable version, Quarkus 1.0, at the end of this month. It is expected to have features like a new reactive core based on Vert.x, a non-blocking security layer, and a new Quarkus ecosystem called 'universe'. What are you most excited about in Quarkus 1.0? What are your favorite features in Quarkus?

One of my favorite features of Quarkus is the reactive core ecosystem, which supports both reactive and imperative programming models, letting Quarkus handle the execution model switch for you. This is one of the biggest gains you will enjoy when moving from a monolithic core, which is inherently based on synchronous executions, to a reactive environment that follows events and not just a loop of instructions. I also consider it of immense value that the foundation of the Quarkus API is a well-known set of APIs that I was already skilled with, which is why I could ramp up and write a book about it in less than one year!

How does the Quarkus Java framework compare with Spring? How do you think the Spring API compatibility in Quarkus 1.0 will help developers?

Both Quarkus and Spring Boot offer a powerful stack of technologies and tools to build Java applications. In general terms, Quarkus inherits its core features from Java EE, with CDI and JAX-RS being the most evident examples. On the other hand, Spring Boot follows an alternative modular architecture based on the Spring core. In terms of microservices, they also differ: Quarkus leverages the MicroProfile API, while Spring Boot relies on Spring Boot Actuator and Netflix Hystrix. Besides the different stacks, Quarkus has some unique features available out of the box, such as build-time class initialization, Kubernetes resources generation, and GraalVM native image support. Although there are no official benchmarks, in the typical case of a REST service built with Quarkus you can observe an RSS memory reduction to half and a 5x increase in boot speed. In terms of compatibility, it's worth mentioning that, while users are encouraged to use CDI annotations for their applications, Quarkus provides a compatibility layer for Spring dependency injection (e.g. @Autowired) in the form of the spring-di extension.
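The "well-known set of APIs" Marchioni mentions shows up in even the smallest Quarkus service. As a minimal sketch - the class name, path, and message are illustrative, and the file assumes a JAX-RS runtime such as Quarkus on the classpath - a standard javax-namespace resource looks like this:

```java
// A plain JAX-RS resource in the standard javax namespace - the kind of
// familiar Java EE code that carries over to Quarkus largely unchanged.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello from a standard JAX-RS endpoint";
    }
}
```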
Quarkus is tailored for GraalVM and crafted from best-of-breed Java libraries and standards. How do you think Quarkus brings Java into the modern microservices and serverless modes of development? Also, why do you think Java continues to be a top programming language for back-end enterprise developers?

Native code execution, in combination with GraalVM, is an amazing opportunity for Java. But I wouldn't say Quarkus is just native-centric, as it immediately buys Java developers an RSS memory reduction to about half, an increase in boot speed, top garbage collector performance, plus a set of libraries that are tailored for the JDK. This makes Java a first-class citizen in the microservices ecosystem, and I bet it will continue to be one of the top programming languages for many years.

On how his book will benefit Java developers and architects

In your book 'Hands-On Cloud-Native Applications with Java and Quarkus' you have demonstrated advanced application development techniques such as reactive programming, message streaming, and advanced configuration hacks. Apart from these, what other techniques can be used for managing advanced application development in Quarkus? Also, apart from the use cases in your book, in what other areas/domains can you use Quarkus?

In terms of configuration, a whole chapter of the book explores the advanced configuration options which are derived from the MicroProfile Config API, and the applications' profile management, which is a convenient way to shift configuration options from one environment to another - think, for example, how easy it can be with Quarkus to switch from a production database to a development or test database. Besides the use cases discussed in the book, I'd say Quarkus is rather polyvalent, based on the number of extensions that are already available. For example, you can easily extend the example provided in the last chapter, which is about streaming data, with advanced transformation patterns and routes provided by the Camel extension, thus covering the most common integration scenarios.

What does your book aim to share with readers? Who will benefit the most from your book? How will your book help Java developers and architects in understanding the microservice architecture?

This book is a log of my journey through the Quarkus land, which started exactly one year ago, at its very first internal preview by our engineers. Therefore my first aim is to ignite the same passion in readers, whatever their "maturity level" in IT. I believe developers and architects from the Java Enterprise trenches will enjoy the fastest path to learning Quarkus, as many extensions are pretty much the same ones they have been using for years. Nevertheless, I believe any young developer with a passion for learning can quickly get on board and become proficient with Quarkus by the end of this book. One advantage of younger developers over seasoned ones, like me, is that it will be easier for them to start thinking in terms of services, instead of building up monolithic giant applications like we used to do for years. Although microservices patterns are not the main focus of this book, a lot of work has been done to demonstrate how to connect services, and not just how to build them.

On how Red Hat uses Quarkus in its products and services

Red Hat is already using Quarkus in its products and services. How is it helping Red Hat increase the efficiency of its cloud-native applications?
To be precise, Quarkus is not yet a Red Hat supported product, but it has already reached an important milestone with the release of Quarkus 1.0 final, so it will definitely be included in the list of our supported products, according to our internal productization roadmap. That being said, Red Hat is working on increasing the efficiency of cloud-native applications in several ways, through a combination of practices, technologies, and processes that can be summarized in the following steps, which will eventually lead to cloud-native application success:

Evolve a DevOps culture and practices to embrace new technology through tighter collaboration.
Speed up existing, monolithic applications with simple migration processes that will eventually lead to microservices or mini services.
Use ready-to-use developer tools, such as application services, to speed up the development of business logic. OpenShift tools (web and CLI) are an example.
Choose the right tool for the right application by using a container-based application platform that supports a large mix of frameworks, languages, and architectures.
Provide self-service, on-demand infrastructure for developers, using containers and container orchestration technology to simplify access to the underlying infrastructure, give control and visibility to IT operations, and provide application lifecycle management across environments.
Automate IT to accelerate application delivery, using clear service requirements definition, self-service catalogs that empower users (such as the Container catalog), and metering and monitoring of runtime processes.
Implement continuous delivery and advanced deployment techniques to accelerate the delivery of your cloud-native applications.
Evolve your applications into a modular architecture by choosing a design that fits your specific needs, such as microservices, a monolith-first approach, or mini services.

On Quarkus' cloud-native security and its competitors

Cloud-native applications provide customers with a better time-to-market strategy, and also allow them to build more robust, resilient, scalable, and cost-effective applications. However, they also come with a big risk of potential security breaches. What is your take on cloud-native security for cloud-native applications? Also, what are your thoughts on future-proofing cloud applications?

Traditionally, IT security was focused on hardening the datacenter perimeter - but today, with cloud applications, that perimeter is fading out. Public and hybrid clouds are shifting responsibility for security and regulatory compliance across vendors. The adoption of containers at scale requires the adoption of new methods of analyzing, securing, and updating the delivery of applications. As a result, static security policies don't scale well for containers in the enterprise; we need to move to a new concept of security called "continuous container security". This includes some key aspects such as securing the container pipeline and the application, securing the container deployment environment(s) and infrastructure, integrating with enterprise security tools, and meeting or enhancing existing security policies.

About the future-proofing of cloud applications, I believe proper planning and diligence can ensure that a company's cloud investments withstand future change, or become future-proof. It needs to be understood that new-generation applications (such as apps for social, gaming, and generally mobile apps) have different requirements and generate different workloads.
This new generation of applications requires a substantial amount of dynamic scaling and elasticity that would be quite expensive or impossible to achieve with traditional architectures based on old data centers and bare-metal machines.

Micronaut and Helidon, the other two frameworks that support GraalVM native images and target cloud-native microservices, are often compared to Quarkus. In what aspects are they similar? And in what ways is Quarkus better than and/or different from the other two?

Although it is challenging to compare a set of cutting-edge frameworks, as some factors might vary in a middle/long-term perspective, in general terms I'd say that Quarkus provides the highest level of flexibility, especially if you want to combine the reactive programming model with the imperative programming model. Also, Quarkus builds on top of well-known APIs such as CDI, JAX-RS, and the MicroProfile API, and uses the standard "javax" namespaces to access them. Hence, the transition from a former enterprise application is quite smooth, compared with competing products. Micronaut, too, has some interesting features, such as support for multiple programming languages (Java, Kotlin, and Groovy, the latter being exclusive to Micronaut) and a powerful command-line interface (CLI) to generate projects. (A CLI is not yet available in Quarkus, although there are plans to include it in upcoming versions.) On the other hand, Helidon is the less polyglot alternative (it supports only Java right now), yet it features a clean and simple approach to containers by providing a self-contained Dockerfile that can be built by simply calling docker build, not requiring anything locally (except the Docker tool, of course). Also, the fact that Helidon plays well with GraalVM should be acknowledged, as they are both official Oracle products. So, although for new projects the decision is often a matter of personal preference and the individual skills in your team, I'd say that Quarkus leverages existing Java Enterprise experience for faster results.

If you want to become an expert in building cloud-native applications with Java and Quarkus, check out the end-to-end development guide presented in the book 'Hands-On Cloud-Native Applications with Java and Quarkus'. This book will also help you understand a wider range of distributed application architectures, use a full-stack framework, and give you a heads-up on the new features in Quarkus 1.0.

About the author

Francesco Marchioni is a Red Hat Certified JBoss Administrator (RHCJA) and Sun Certified Enterprise Architect (SCEA) working at Red Hat in Rome, Italy. He started learning Java in 1997, and since then he has followed all the newest application program interfaces released by Sun. In 2000, he joined the JBoss community, when the application server was running the 2.X release. He has spent years as a software consultant, where he has enabled many successful software migrations from vendor platforms to open source products such as JBoss AS, fulfilling the tight budget requirements necessitated by the current economy. Francesco also manages a blog on WildFly Application Server, OpenShift, JBoss projects, and enterprise applications, focused on Java and JBoss technologies. You can reach him on Twitter and LinkedIn.
Read next:
Red Hat's Quarkus announces plans for Quarkus 1.0, releases its rc1
How Quarkus brings Java into the modern world of enterprise tech
Introducing 'Quarkus', a Kubernetes-native Java framework for GraalVM & OpenJDK HotSpot
OpenJDK Project Valhalla's head shares how they plan to enhance the Java language and JVM with value types, and more
Snyk's JavaScript frameworks security report 2019 shares the state of security for React, Angular, and other frontend projects

Fastly SVP, Adam Denenberg on Fastly’s new edge resources, edge computing, fog computing, and more

Bhagyashree R
30 Sep 2019
9 min read
Last month, Fastly, a provider of an edge cloud platform, introduced a collection of resources to help developers learn the ins and outs of popular cloud solutions. The collection consists of step-by-step tutorials and ready-to-deploy code that developers can customize and deploy to their Fastly configuration. We had the opportunity to interview Adam Denenberg, Fastly's SVP of Customer Solutions, to get more insight into this particular project and other initiatives Fastly is taking to empower developers. We also grabbed this opportunity to talk to Denenberg about the emergence and growth of edge computing and fog computing, and what it all means for the industry.

What are the advantages of edge computing over cloud?

Cloud computing is a centralized service that provides computing resources - servers, storage, databases, networking, software, analytics, and intelligence - on demand. It is flexible, scalable, enables faster innovation, and has revolutionized the way people store and interact with data. However, because it is a centralized system, it can cause issues such as higher latency, limited bandwidth, security issues, and the requirement of high-speed internet connectivity. This is where edge computing comes in, to address these limitations. In essence, it's a decentralized cloud.

"Edge computing is the move to put compute power and logic as close to the end user as possible. The edge cloud uses the emerging serverless paradigm of cloud computing, in which the cloud provider runs the server and dynamically manages the allocation of machine resources," Denenberg explains.

When it comes to making real-time decisions, edge computing can be very effective. He adds: "The average consumer expects speedy online experiences, so when milliseconds matter, the advantage of processing at the edge is that it is an ideal way to handle highly dynamic and time-sensitive data quickly. In contrast, running modern applications from a central cloud poses challenges related to latency, ability to pre-scale, and cost-efficiency."

What is the difference between fog computing and edge computing?

Fog computing and edge computing can appear very similar. Both involve pushing intelligence and processing capabilities closer to the origin of the data. The difference lies in where that intelligence and compute power is placed. Explaining the distinction, Denenberg said: "Fog computing, a term invented by Cisco, shares some similar design goals with edge computing, such as reducing latency to the end-user request and providing access to compute resources in a decentralized model. After that, things begin to differ."

He adds: "On the one hand, fog computing has a focus on use cases like IoT and sensors. This allows enterprises to extend their network from a central cloud closer to their devices and sensors, while maintaining a reliance on the central cloud. Edge computing, on the other hand, is also about moving compute closer to the end user, but doing so in a way that removes the dependency on the central cloud as much as possible. By collocating compute and storage (cache) on Fastly's edge cloud, our customers are able to build very complex, global-scale applications and digital experiences without any dependency on centralized compute resources."
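The collocated compute-and-cache model Denenberg describes can be sketched in a few lines. This toy TTL cache is purely conceptual Java - it is not Fastly's API - but it shows why answering at the edge avoids the long round trip to a central origin:

```java
// Toy model of the edge pattern: each edge node keeps a small TTL cache so
// requests are answered near the user instead of making the round trip to a
// central origin. Conceptual only; paths, TTL, and content are invented.
import java.util.HashMap;
import java.util.Map;

public class EdgeCacheDemo {
    private static class Entry {
        final String body;
        final long expiresAtMillis;
        Entry(String body, long expiresAtMillis) {
            this.body = body;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> cache = new HashMap<>();
    private final long ttlMillis;

    EdgeCacheDemo(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Serve from the edge cache while fresh; otherwise fetch from origin and cache.
    String get(String path) {
        long now = System.currentTimeMillis();
        Entry entry = cache.get(path);
        if (entry != null && now < entry.expiresAtMillis) {
            return "EDGE HIT: " + entry.body;
        }
        String body = fetchFromOrigin(path); // the slow, far-away trip we want to avoid
        cache.put(path, new Entry(body, now + ttlMillis));
        return "ORIGIN FETCH: " + body;
    }

    private String fetchFromOrigin(String path) {
        return "content of " + path; // stand-in for the central cloud
    }

    public static void main(String[] args) {
        EdgeCacheDemo edge = new EdgeCacheDemo(60_000); // 60-second TTL
        System.out.println(edge.get("/index.html")); // misses, goes to origin
        System.out.println(edge.get("/index.html")); // answered at the edge
    }
}
```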
Will edge computing replace cloud computing?

A short answer to this question would be "not really." "I don't think anything at this moment will fully replace the central cloud," Denenberg explains. "People said data centers were dead as soon as AWS took off, and, while we certainly saw a dramatic shift in where workloads were being run over the last decade, plenty of organizations still operate very large data centers. There will continue to be certain workloads - such as large-scale offline data processing, data warehouses, and the building of machine learning models - that are much better suited to an environment that requires high compute density and long, complex processing times, operating on extremely massive data sets with no time sensitivity."

What is Fastly?

Fastly's story started back in 2008, when Artur Bergman, its founder, was working at Wikia. Three years later he founded Fastly, headquartered in San Francisco with branches in four cities: London, Tokyo, New York, and Denver. Denenberg shared that Fastly's edge cloud platform was built to address the limitations of content delivery networks (CDNs).

"Fastly is an edge cloud platform built by developers, to empower developers. It came about as a result of our founder Artur Bergman's experience leading engineering at Wikia, where his passion for delivering fast, reliable, and secure online experiences for communities around the world was born. He saw firsthand that CDNs - which were supposed to address this problem - weren't equipped to enable the global, real-time experiences needed in the modern era."

He further said: "To ensure a fast, reliable, and secure online experience, Fastly developed an edge cloud platform designed to provide unprecedented, real-time control and visibility that removes traditional barriers to innovation. Knowing that developers are at the heart of building the online experience, Fastly was built to empower other developers to write and deploy code at the edge. We did this by making the platform extremely accessible, self-service, and API-first."

Fastly's new edge cloud resources

Coming to Fastly's new edge cloud resources, Denenberg shared the motivation behind the launch: "We're here to serve the developer community and allow them to dream bigger at the edge, where we believe the future of the web will be built. This new collection of recipes and tutorials was born out of countless collaborations and problem-solving discussions with Fastly's global community of customers. Fastly's new collection of edge cloud resources makes it faster and safer for developers to discover, test, customize, and deploy edge cloud solutions."

Currently, Fastly has shared 66 code-based edge cloud solutions, covering aspects like authentication, image optimization, logging, and more. It plans to add more solutions to the list in the near future. Denenberg shared: "Our initial launch of 66 recipes and four solution patterns was created from some of the most common and valuable solutions we've seen when working with our global customer base. However, this is just the beginning - many more solutions are on our radar to launch on a regular cadence. This is what has us really excited: as we expose more of these solutions to customers, the more inspiration they have to go even further in their work, which creates a remarkable flywheel of innovation on our edge cloud."

Challenges when developing on the edge

When asked which edge cloud problems developers often find difficult, Denenberg said: "I think difficulty is a tricky thing to address, because engineering is a lot of times about tradeoffs."
Those tradeoffs are most often realized when pursuing instant scalability, being able to run edge functions everywhere, and achieving low latency and microsecond boot times.

He adds: "NoSQL saw tremendous growth because it presented the ability to achieve scale with very reasonable trade-offs, based on the types of applications people were building, that traditional SQL databases made very difficult from an architectural perspective - like scaling writes linearly to a cluster easily, for example. So for me, given the wide variety of applications our customers can build, I think it's about taking advantage of our platform in a way that improves the overall user experience, which sometimes just requires a shift in mindset about how those applications are architected."

We asked Denenberg whether other developers will be able to pitch in to expand this collection of resources. "We are already talking with customers who are excited to share what they have built on our platform that might allow others to achieve enhanced online experiences for their end users," he told us. "Fastly has an internal team dedicated to reviewing the solutions customers are interested in sharing, to ensure they have the same consistency and coding style that mirrors how we would publish them internally. We welcome the sharing of innovation from our customer base that continues to inspire us through their work on the edge."

Other initiatives by Fastly to empower developers

Fastly is continuously contributing towards making the internet more trustworthy and safer by getting involved in projects like QUIC, Encrypted SNI, and WebAssembly. Last year, Fastly made three of its projects available on Fastly Labs: Terrarium, Fiddle, and Insights.

Read also: Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol

Denenberg shared that there are many ways Fastly is contributing to the open source community: "Yes, empowering developers is at the forefront of what we do. As developers are familiar with the open-source caching software that we use, it makes adopting our platform easier. We give away free Fastly services to open source and nonprofit projects. We also continue to work on open source projects, which empower developers to build applications in multiple languages and run them faster and more securely at our edge."

Fastly also constantly tries to improve its edge cloud platform to meet its customers' needs and empower them to innovate. "As an ongoing priority, we work to ensure that developers have the control and insight into our edge platform that they need. To this end, our programmable edge provides developers with real-time visibility and control, where they can write and deploy code to push application logic to the edge. This supports modern application delivery processes and, just as importantly, frees developers to innovate without constraints," Denenberg adds.

He concludes: "Finally, we believe our values empower our community in several ways. At Fastly, we have chosen to grow with a focus on transparency, integrity, and inclusion. To do this, we are building a kind, ethical, and inclusive team that reflects our diverse customer base and the diversity of the developers that are creating online experiences. The more diverse our workforce, the easier it is to attract diverse talent and build technology that provides true value for our developer community across the world."

Follow Adam Denenberg on Twitter: @denen

Learn more about Fastly and its edge cloud platform at Fastly's official website.
More on cloud computing:
Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users
Google Cloud and Nvidia Tesla set new AI training records with MLPerf benchmark results
How do AWS developers manage Web apps?

Ride the third wave of BI with Microsoft Power BI

Amey Varangaonkar
09 Oct 2017
8 min read
Self-service business intelligence is the buzzword everyone's talking about today. It gives modern business users the ability to find unique insights from their data without any hassle. Amidst a myriad of BI tools and platforms out there in the market, Microsoft's Power BI has emerged as a powerful, all-encompassing BI solution, empowering users to tailor and manage business intelligence to suit their unique needs and scenarios.

Brett Powell is a Microsoft Power BI partner and the founder and owner of Frontline Analytics LLC, a BI and analytics research and consulting firm. Brett has contributed to the design and development of Microsoft BI stack and Power BI solutions of diverse scale and complexity across the retail, manufacturing, financial, and services industries. He regularly blogs about the latest happenings in Microsoft BI and Power BI features at Insight Quest. He is also an organizer of the Boston BI User Group.

In this two-part interview, Brett talks about his new book, Microsoft Power BI Cookbook, and shares his insights and expertise in the area of BI and data analytics, with a particular focus on Power BI. In part one, Brett shares his views on topics ranging from what it takes to be successful in the field of BI and data analytics to why he thinks Microsoft is going to lead the way in shaping the future of the BI landscape. In part two of the interview, he shares his expertise with us on the unique features that differentiate Power BI from other tools and platforms in the BI space.

Key takeaways:
Ease of deployment across multiple platforms, efficient data-driven insights, ease of use, and support for a data-driven corporate culture are factors to consider while choosing a business intelligence solution for enterprises.
Power BI leads in self-service BI because it's the first Software-as-a-Service (SaaS) platform to offer 'End User BI', where anyone, not just a business analyst, can leverage powerful tools to obtain greater value from data.
Microsoft Power BI has been identified as a leader in Gartner's Magic Quadrant for BI and Analytics platforms, and provides a visually rich and easy-to-access interface that modern business users require.
You can isolate report authoring from dataset development in Power BI, or quickly scale a Power BI dataset up or down as per your needs.
Power BI is much more than just a tool for reports and dashboards. With a thorough understanding of the query and analytical engines of Power BI, users can create more powerful and sustainable BI solutions.

Part One Interview Excerpts - Power BI from a Bird's Eye View

On choosing the right BI solution for your enterprise needs

What are some key criteria one must evaluate while choosing a BI solution for enterprises? How does Power BI fare against these criteria as compared with other leading solutions from IBM, Oracle, and Qlikview?

Enterprises require a platform which can be implemented on their terms and adapted to their evolving needs. For example, the platform must support on-premises, cloud, and hybrid deployments with seamless integration, allowing organizations to both leverage on-premises assets and fully manage their cloud solution. Additionally, the platform must fully support both corporate business intelligence processes, such as staged deployments across development and production environments, and self-service tools which empower business teams to contribute to BI projects and a data-driven corporate culture.
Furthermore, enterprises must consider the commitment of the vendor to BI and analytics, the full cost of scaling and managing the solution, as well as the vendor's vision for delivering emerging capabilities such as artificial intelligence and natural language. Microsoft Power BI has been identified as a leader in Gartner's Magic Quadrant for BI and Analytics platforms, based on both its current ability to execute and its vision. Particularly now, with the Power BI Premium, Power BI Report Server, and Power BI Embedded offerings, Power BI truly offers organizations the ability to tailor and manage BI to their unique needs and scenarios.

Power BI's mobile application, available on all common platforms (iOS, Android), in addition to continued user experience improvements in the Power BI service, provides a visually rich and common interface for the 'anytime access' that modern business users require. Additionally, since Power BI's self-service authoring tool, Power BI Desktop, shares the same engine as SQL Server Analysis Services, Power BI has a distinct advantage in enabling organizations to derive value from both self-service and corporate BI.

The BI landscape is very competitive, and other vendors such as Tableau and Qlikview have obtained significant market share. However, as organizations fully consider the features distinguishing the products, in addition to the licensing structures and the integration with Microsoft Azure, Office 365, and common existing BI assets such as Excel and SQL Server Reporting Services and Analysis Services, they will (and are) increasingly concluding that Power BI provides compelling value.

On the future of BI and why Brett is betting on Microsoft to lead the way

Self-service BI as a trend has become mainstream. How does Microsoft Power BI lead this trend? Where do you foresee the BI market heading next, i.e., are there other trends we should watch out for?

Power BI leads in self-service BI because it's the first software-as-a-service (SaaS) platform to offer 'End User BI', in which anyone, not just a business analyst, can leverage powerful tools to obtain greater value from data. This 'third wave' of BI, as Microsoft suggests, follows and supplements the first and second waves of BI in corporate and self-service BI, respectively. For example, Power BI's Q&A experience, with natural language queries and integration with Cortana, goes far beyond the traditional self-service process of an analyst finding field names and dragging and dropping items on a canvas to build a report. Additionally, an end user has the power of machine learning algorithms at their fingertips with features such as Quick Insights, now built into Power BI Desktop.

Furthermore, it's critical to understand that Microsoft has a much larger vision for self-service BI than other vendors. Self-service BI is not exclusively the visualization layer over a corporate IT-controlled data model - it's also the ability for self-service solutions to be extended and migrated to corporate solutions as part of a complete BI strategy. Given their common underlying technologies, Microsoft is able to remove friction between corporate and self-service BI and allows organizations to manage modern, iterative BI project lifecycles.

On staying ahead of the curve in the data analytics and BI industry

For someone just starting out in the data analytics and BI fields, what would your advice be? How can one keep up with the changes in this industry?
I would focus on building a foundation in the areas which don't change frequently, such as math, statistics, and dimensional modeling. You don't need to become a data scientist or a data warehouse architect to deliver great value to organizations, but you do need to know the basic tools of storing and analyzing data to answer business questions. To succeed in this industry over time, you need to consistently invest in your skills in the areas and technologies relevant to your chosen path. You need to hold yourself accountable for becoming a better data professional, and this can be accomplished by certification exams, authoring technical blogs, giving presentations, or simply taking notes from technical books and testing out tools and code on your machine.

For hard skills, I'd recommend standard SQL, relational database fundamentals, data warehouse architecture and dimensional model design, and at least a core knowledge of common data transformation processes and/or tools such as SQL Server Integration Services (SSIS) and SQL stored procedures. You'll need to master an analytical language as well, and for Microsoft BI projects that language is increasingly DAX.

For soft skills, you need to move beyond simply looking for a list of requirements for your projects. You need to learn to become flexible and active - someone who offers ideas and looks to show value and consistently improve projects, rather than just 'deliver requirements'. You need to be able to have both a deeply technical conversation and a very practical conversation with business stakeholders. You need to be able to build relationships with both business and IT. You don't ever want to dominate or try to impress anyone, but if you're truly passionate about your work then this will be visible in how you speak about your projects and the positive energy you bring to work every day and to your ongoing personal development.

If you enjoyed this interview, check out Brett's latest book, Microsoft Power BI Cookbook. In part two of the interview, Brett shares 5 Power BI features to watch out for, 7 reasons to choose Power BI to build enterprise solutions, and more. Visit us tomorrow to read part two of the interview.

"This is John. He literally wrote the book on Puppet" - An Interview with John Arundel

Packt
04 Dec 2017
8 min read
John Arundel is a DevOps consultant, which means he helps businesses use software better. When he's not supporting his clients, John is writing for Packt. John has written a number of books over the last few years, most recently Puppet 5 Beginner's Guide. Puppet is one of the most popular tools in the DevOps toolchain; it's a tool that gives administrators and architects significant control over their infrastructure. For that reason, it's a tool worth exploring, whatever field you're working in. It's likely to play a large part in the continued rise of DevOps throughout 2018. We spoke to John about Puppet, DevOps, and his new book, as well as his experience writing it.

Packt: Your book is published now. How does it feel to be a published author?

John Arundel: Pretty great! At one time I wrote technical manuals for Psion, the palmtop computer manufacturer. Thanks to a conservative house style, the kind of books I wrote said things like: "To access file operations, select the File menu". Not exactly a page-turner. I'm very happy now to be able to publish a book which is written more or less exactly the way I want it, on a subject I find very interesting, and including a lot of jokes.

What benefits did writing a book bring to your specialist area?

JA: The funny thing is that despite being a Puppet user almost since the very beginning, I really don't use many of its features. In fact, most of them have been added since I started using Puppet, and I don't have a lot of time to experiment with new stuff, so writing the book was a great opportunity to delve into all the Puppet features I didn't know about. I'm hoping that readers will also find out stuff they didn't know and that will come in useful. If just one person is helped and inspired by this book... then I'm not giving refunds to the others.

It's done a lot to raise the profile of my consulting business; I was introduced to one potential client as "This is John. He literally wrote the book on Puppet". I had to modestly point out that, in fact, other, probably better books are available.

Our authors usually have full-time jobs whilst writing for us. Was this the case for you, and how did you approach managing your time?

JA: As any freelancer knows, the job is more than full-time. I practically had to invent new physics to figure out a way of using my negative free time to write a book. I blocked out one day a week devoted to writing, and set myself a goal of a number of hours to achieve each month, which I mostly met. Because the book is so code-focused, I had to not only write about each technique I was describing, but also develop complete, working, reusable software in Puppet to implement it, and then test this on a virtual machine. Quite frequently I'd discover later that I'd been doing something wrong, or a behaviour in Puppet had changed, and I'd have to go back and fix all the code. I'm sure there are still quite a few bugs, which I am going to pretend I've deliberately inserted to help the reader learn to debug and fix Puppet code: something they will, after all, spend a great deal of time doing. In all, what with researching, writing, coding, testing, fixing, editing, and complaining on Twitter, I spent about 200 hours on the book over 8 months.

While writing your book, did you find that it overshadowed your personal life in any way? How did you deal with this?

JA: Not really. It could have, if I'd got into serious deadline trouble. Fortunately, I managed to keep up a continuous, manageable level of mild deadline trouble.
I don't think my friends or family noticed, except occasionally I'd say things at dinner like, "Could you pass the Puppet? I mean pepper."

Do you have any advice for other authors who may be interested in writing for Packt, but are still unsure?

JA: Go for it! But make sure you have two hundred unallocated hours in your schedule. You'd be amazed how much time you can save by not watching TV, going out, putting on clothes, etc. Really, my advice would be to plan the book carefully - agreeing the outline in advance with your editor helps a lot. Henry Ford said that there are no big problems, just lots of little problems. Breaking down a book into chapters and sections and subsections, and tackling them one by one, makes it seem less daunting. And managing your time well helps avoid last-minute-essay syndrome.

Do you have any tips for other authors, or tricks that you learnt whilst writing, that you'd like to share?

JA: One good tip is, once you've written a chapter, let it lie fallow for a few weeks and then come back to it with a fresh eye. What you thought were immaculately crafted sentences turn out to be pompous waffle. And what seemed clear and explicit now seems larded with techno-babble. I read somewhere that P.G. Wodehouse would stick each page of manuscript to the wall as he wrote it, somewhere around skirting-board level, and as he obsessively reworked and rewrote and polished the text he would gradually move it higher and higher up the wall until he judged it good enough - somewhere near the ceiling. Well, I'm not saying I'm P.G. Wodehouse - I'll leave that for others to say - but it's a useful way to think about the writing process. Rewrite, rewrite, rewrite! "What is written without effort," Dr Johnson pointed out, "is in general read without pleasure."

Was there anything interesting that happened during the writing of the book?

JA: Only in the sense the Chinese use when they curse you to live in interesting times. Quite often I wrote myself to a standstill and just stared blankly at the laptop, hoping it would explain something complicated for me, or think up a useful and instructive example when I couldn't. On one occasion I decided that the best thing to do with a certain long, difficult, and laboriously constructed section was to delete it altogether, improving the book immeasurably as a result. The deleted scenes will be available on a forthcoming DVD, together with a 'making of' documentary which consists of me frowning at a screen for 200 hours and intermittently making tea.

How did Packt's Acquisition Editors help you - what kind of things did they help you with, and how did they support you throughout the writing process?

JA: The biggest help at the start was giving me structure by insisting on an outline, and then setting individual chapter deadlines to plan the writing time - then gently but persistently enforcing them. What was also very useful was to see sample chapters of other books, to get an idea of where I was supposed to be going, and getting very detailed feedback on the early chapters about exactly how to lay things out and how to make everything consistent. Beyond that, I was pleased and surprised by how little the editors interfered with what I was doing. By and large I was allowed to write my own book the way I wanted. No one suggested I write sentences like "To access file operations, select the File menu." When I asked for help, I got it, and when I didn't, I was left in peace and trusted to do the right thing. That's a great way to write.
What projects, if any, are you working on at the moment?

JA: Several people have asked what the next book's going to be. I have said, only half-jokingly, that I might do one on Chef. I have a kind of a semi-formed idea about a book of system administration patterns and practices, based on my several decades' worth of experience (read: mistakes). But just now I'm enjoying a break from writing, and I'm spending my negative free time reading other people's books, playing Beethoven on my toy piano like Schroeder out of Peanuts, and learning to bake the perfect Cornish pasty. Ah! Excuse me, that was the oven timer.

Thanks for taking the time to talk to us, John!

You can find John's latest book, Puppet 5 Beginner's Guide, here.

Translating between the virtual and the real: Interview with artist Scott Kildall

Michael Ang
13 Feb 2015
9 min read
Scott Kildall is an artist whose work often explores themes of future-thinking and translation between the virtual and the real. His latest projects use physical data visualization: the transformation of data sources into physical objects via computer algorithms. We're currently collaborating on the Polygon Construction Kit, a software toolkit for building physical polygon structures from wireframe 3D models. I caught up with Scott to ask him about his work and how 3D printing and digital fabrication are changing the production of artwork.

How would you describe your work?

I write computer algorithms that generate physical sculptures from various datasets. This has been a recent shift in my art practice. Just five years ago, digital fabrication techniques - 3D printing, CNC machinery, and other forms of advanced fabrication - were simply too expensive for artists. Specifically, I've been diving into what the media ominously calls "big data," which entails thousands upon thousands of data points, ranging from city infrastructure data to biometric feedback. From various datasets, I have been generating 3D-printed sculptures.

Water Works - Imaginary Drinking Hydrants (2014), 3D-printed sculpture with laser-etched wood map

What are some of the tools that you use?

I write my own software code from the ground up, both to optimize the production process and to create a unique look for my work. My weapon of choice is openFrameworks, a C++ toolkit that is relatively easy to use for a seasoned applications programmer. The other open source tool I use is Processing, which is a quick and dirty way to prototype ideas. Python, my new favorite language, is excellent for transforming and cleaning datasets, which is the not-so-glamorous side of making "data art".

You've just completed some residencies and fellowships. Can you tell us about those?

In 2014, I was an artist in residence at Autodesk in San Francisco, where I live. Autodesk has an amazing shop facility, including six state-of-the-art Objet 500 printers. The resulting prints are resin-based and capture accurate details. During a several-month period, I was able to iteratively experiment with 3D printing at a rate that was much faster than maintaining my own extrusion 3D printer.

Data Crystals (2014), incidents of crime data from San Francisco

The first project I worked on is called Data Crystals, which uses public datasets from the city government of San Francisco; anyone can download them from the data portal at SFGov.org. The city's open data includes all sorts of goodies, such as geolocated points for incidents of crime and every parking meter in the city. I mapped various data points on an x-y plane using the latitude and longitude coordinates. The z-plane was then a dimension of time or space. To generate the "Crime Data" crystal, I worked with over 30,000 data points. My code represented each data point as a simple cube, with the size being proportional to the severity of the crime. I then ran clustering algorithms to create one cohesive object, which I call a "crystal" - like a synthetic rock that a data miner might find.
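Purely as an illustration of the process Kildall describes - cubes sized by severity, then condensed by clustering - here is a toy sketch. The coordinates and the centroid-pull step are invented; this is not his openFrameworks code:

```java
// Sketch of the Data Crystals idea: map geolocated records to cubes sized by
// severity, then pull them toward the centroid so they fuse into one
// crystal-like mass. Toy data and a toy clustering step only.
import java.util.ArrayList;
import java.util.List;

public class DataCrystalSketch {
    private static class Cube {
        double x, y, size;
        Cube(double x, double y, double size) {
            this.x = x; this.y = y; this.size = size;
        }
    }

    public static void main(String[] args) {
        // (lat, lon, severity) triples standing in for the open crime dataset.
        double[][] records = {{37.77, -122.42, 3}, {37.80, -122.41, 1}, {37.74, -122.47, 5}};
        List<Cube> cubes = new ArrayList<>();
        for (double[] r : records) {
            // Longitude/latitude become x/y; cube size is proportional to severity.
            cubes.add(new Cube(r[1], r[0], r[2] * 0.1));
        }

        // Naive clustering: step every cube a fraction of the way to the
        // centroid so the scattered cubes condense into a single mass.
        for (int step = 0; step < 50; step++) {
            double cx = cubes.stream().mapToDouble(c -> c.x).average().orElse(0);
            double cy = cubes.stream().mapToDouble(c -> c.y).average().orElse(0);
            for (Cube c : cubes) {
                c.x += (cx - c.x) * 0.1;
                c.y += (cy - c.y) * 0.1;
            }
        }
        cubes.forEach(c ->
                System.out.printf("cube at (%.4f, %.4f), size %.1f%n", c.x, c.y, c.size));
    }
}
```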
In a sense you're mining an abstract data source into a physical object...

It was more like finding a concrete data source and then turning it into an abstract physical object. With conventional 2D data visualizations you can clearly see where the hotspots of crime, or other data points, might be on a map. However, the Data Crystals favor aesthetics over legibility. The central question I wanted to answer was "what does data look like?" When people create screen-based data visualizations, they focus on what story to tell. I was intrigued by the abstract data itself, and so made art objects which you could look at from different vantage points.

What is it about having the data occupy a physical space that's important to you?

When data occupies a physical space, it is static, like a snapshot in time. Rather than controlling time, as you would with a slider on a screen-based visualization, you can examine the minutiae of the physical characteristics. The data itself invites a level of curiosity that you don't get with a mediated, screen-based interaction. Real objects tap into the power of conventional perception, which is innate to how our brains interact with the world.

Tell us a bit about your series of sculptures that were created in the virtual world of Second Life and then physically created using Pepakura software and papercraft techniques.

An earlier project that I worked on, in collaboration with my partner Victoria Scott, is No Matter (2008). This project was instrumental in my development of how to transform the imaginary into the real. For this project, we worked with the concept of imaginary objects: things that have never physically been built, but exist in our shared imagination. They include items from mythology like the Holy Grail or the Trojan Horse, from fiction like the Maltese Falcon or the Yellow Submarine, and impossible objects/thought experiments like the Time Machine or Schrodinger's Cat. We constructed these objects in the imaginary world of Second Life, extracted them as "digital plunder", and then rebuilt them as paper sculptures in real space.

No Matter (2008), Second Life installation at Ars Virtua

No Matter (2008), Yellow Submarine paper sculpture

Because they were paper sculptures that were physically fabricated, there were physical constraints, such that the forms themselves had to be vastly simplified into smaller faceted objects. The collision of this faceted proto-object with beautiful high-resolution prints resonates with most viewers on an aesthetic level. Working with that kind of virtual space led me to thinking about the question of data: could this intangible "thing" also be represented materially?

No Matter (2008), installation at Huret & Spector Gallery

What are you working on now?

I'm developing a new project called Machine Data Dreams, which examines the question "how do machines think?" To make computers function, humans program code in languages such as JavaScript or Python or C++. There are whole sets of people who are literate in these machine languages, while most others in the world don't "speak" them. Knowing how to code gives you power and money, though usually not prestige. However, understanding how machines process language will be increasingly important, as they will undoubtedly be increasingly integrated with human biology. My proposition is to create a room-based installation that reflects the structure of language and how machines might view the world, through datasets representing machine syntax. What I will be doing is taking machine languages (e.g. JavaScript, Python, C++) and translating them into language-based datasets. From these, I will algorithmically generate a cave-like 3D model with triangulated faces. The Polycon Construction Kit - which you developed - will be instrumental in making this happen.
Last year, I had sketched out ideas in my notebook about creating custom 3D-printed connectors for large-scale sculptural installations, and then I found out that you had already been working on this technology. So, thank you for inviting me to collaborate on Polycon! I'm grateful to be figuring out how to do this with a trusted colleague.

What are some of the trends or new technologies that you're excited about?

There's so much happening in the field of digital fabrication! For example, 3D printing technology has so much to offer now, and we're at a pioneering stage, akin to Photoshop 1.0. Artists are experimenting with the medium and can shape the dialog of what the medium itself is about. It's clear to me that 3D printing / 3D fabrication will be a larger part of our economy and our universe. It fundamentally changes the way materials are produced. Many have made this observation, but it is still understated. 3D printing is redefining the paradigm of material production, which also affects art production and factory production. This is how capitalist-based economies will operate in the next 20 or 30 years.

From an artistic standpoint, working with digital fabrication technology has changed the way I think about sculpture. For example, I can create something with code that I never thought was even possible. I don't even know what the forms will look like when I write the code. The code generates forms and I find unexpected results, ranging from the amazing to the mediocre to the crappy. Then I can tweak the algorithms to focus on what works best.

You've had a chance to work with some of the most advanced technologies through the Autodesk residency. Now you have your own 3D printer in your garage. Do you see a difference in your creative process using the really high-end machines versus something you can have in your garage?

Working with the high-end machines is incredible because it gives you access to making things as perfect as you can make them. I went from working with printers that probably cost half a million dollars to a printer that I got for $450.

Like half a million to half a thousand?

Yes! The Autodesk printers are three orders of magnitude more expensive. The two types of printers have vastly different results. With my garage setup, I have the familiar tales: messed-up 3D prints and the aches and pains of limit switches, belts and stepper motors. I've been interested in exploiting the glitches and mistakes. What happens when you get bad data? What happens when you get glitchy material errata? My small extrusion printer gives me a lot more appreciation for fixing things, but when making something precise I'd much rather work with the high-quality 3D printers.

Where can we see more of your work?

All my work is on my website at http://kildall.com. My blog at http://kildall.com/blog has my current thought processes. You can follow me on Twitter at @kildall

About the Author: Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. He is the creator of the Polygon Construction Kit, a toolkit for bridging the virtual and physical worlds by translating simple 3D models into physical structures.

An Interview with Hussein Nasser

Hussein Nasser
01 Jul 2014
4 min read
What initially drew you to write your book for Packt Publishing?

In 2009, I started writing technical articles on my personal blog. I would write about my field, Geographic Information Systems, or any other technical topics. Whenever a new technology emerged, a new product, or sometimes even mere tips or tricks, I would write an article about it. My blog became a well-known site in GIS, and that is when Packt approached me with a proposed title. I always wanted to write a book but I never expected that the opportunity would knock on my door. I thank Packt for giving me that opportunity.

When you began writing, what were your main aims?

My main aim was to write a book that readers in my domain could grab and benefit from. While working on a chapter, I would always imagine a reader picking up the book and reading that particular chapter, and ask myself, what could I do better? And then I tried to make the chapter as simple as possible and leave nothing unexplained.

What did you enjoy most and what was most rewarding about the experience of writing?

Think about all the knowledge, information, ideas, and tips that you possess. You knew you had it in you somewhere, but you didn't know the joy and delight you would feel when this knowledge slipped through your fingertips into a physical medium. With each reading I would reread and polish the chapters; it seems there is always room for improvement in writing.

Why, in your opinion, is ArcGIS exciting to discover, read, and write about?

ArcGIS is not a new technology; it has been around for more than 14 years. It has become mature and polished during these years. It has expanded and started touching other bleeding-edge technologies like mobile, web, and the cloud. Every day this technology is increasingly worth discovering, and every day it benefits areas like health, utilities, transportation, and so on.

Why do you think interest in GIS is on the rise?

If you read The Tipping Point, by Malcolm T. Gladwell, you will understand that the smartphone was actually a tipping point for GIS technology. GIS was only used by enterprises and big companies who wanted to add the location dimension to their tabular data so it helped them better visualize and analyze their information. With smartphones and GPS, geographic location became more relevant. Pictures taken with smartphones are tagged with location information. Applications were developed to harness the power of GIS for routing, finding the best restaurants in an area, calculating shortest routes, finding information based on geo-fencing technology that sends you text messages when you pass by a shop, and so on. The popularity of GIS is rising and so is the interest in adopting this technology.

What do you see on the horizon for GIS?

High-end processing servers are being sent to the cloud while we are carrying smaller and smaller gadgets. Networking is getting stronger every day, with the LTE and 4G networks already set up in many countries. Storage has become no issue at all. The web architecture is dominant so far and it is the most open and compatible platform that has ever existed. As long as we keep using devices, we will need geographic information systems. The data can be consumed and fetched swiftly from anywhere in the world from the smallest device. I believe this will evolve to an extent that everything valuable we own can be tagged with a location, so when we misplace something or lose it, we can always use GIS to locate it.

Any tips for new authors?
My role model author is Seth Godin; the first book I ever read was his. When I told him about my new book and asked him for any advice he might give me as a new author, he told me, and I quote, "Congratulations, Hussein. This is thrilling to hear; my only advice is to keep writing!" I took his advice and now I'm working on my second book with Packt. Another personal tip I can give to new authors is that writing needs focus, and I find music the best way to feed the soul. While working on my first book, I discovered the site www.stereomood.com, which plays music that will help you write. Another thing is to use a clutter-free word processor application that will blank the entire screen so you are only left with your words. I use WriteMonkey for Windows and FocusWriter for Mac.

Why the Industrial Internet of Things (IIoT) needs Architects

Aaron Lazar
21 Nov 2017
8 min read
The Industrial Internet, the IIoT, the 4th Industrial Revolution or Industry 4.0, whatever you may call it, has gained a lot of traction in recent times. Many leading companies are driving this revolution, connecting smart edge devices to cloud-based analysis platforms and solving their business challenges in new and smarter ways. To ensure the smooth integration of such machines and devices, effective architectural strategies based on accepted principles, best practices, and lessons learned must be applied. In this interview, Shyam throws light on his new book, Architecting the Industrial Internet, and shares expert insights into the world of IIoT, Big Data, Artificial Intelligence and more.

Shyam Nath

Shyam is the director of technology integrations for Industrial IoT at GE Digital. His area of focus is building go-to-market solutions. His technical expertise lies in big data and analytics architecture and solutions, with a focus on IoT. He joined GE in September 2013, prior to which he worked at IBM, Deloitte, Oracle, and Halliburton. He is the Founder/President of the BIWA Group, a global community of professionals in Big Data, analytics, and IoT. He has often been listed as one of the top social media influencers for Industrial IoT. You can follow him on Twitter @ShyamVaran.

He talks about the IIoT and the various impacts that technologies like AI and Deep Learning will have on it, and he gives a futuristic direction to where IIoT is headed. He also talks about the challenges that architects face while architecting IIoT solutions and how his book will help them overcome such issues.

Key Takeaways

- The fourth Industrial Revolution will break silos and bring IT and Ops teams together to function more smoothly.
- Choosing the right technology to work with involves taking risks and experimenting with custom solutions.
- The Predix platform and Predix.io allow developers and architects to quickly learn from others and build working prototypes that can be used to get quick feedback from the business users.
- Interoperability issues and a lack of understanding of all the security ramifications of the hyper-connected world could be a few challenges that the adoption of IIoT must overcome.
- Supporting technologies like AI, Deep Learning, AR and VR will have major impacts on the Industrial Internet.

In-depth Interview

On the promise of a future with the Industrial Internet

The 4th Industrial Revolution is evolving at a terrific pace. Can you highlight some of the most notable aspects of Industry 4.0?

The Industrial Internet is the 4th Industrial Revolution. It will have a profound impact on both industrial productivity and the future of work. Due to more reliable power, cleaner water, and Intelligent Cities, the standard of living will improve at large for the world's citizens. The Industrial Internet will forge new collaborations between IT and OT in organizations, and each side will develop a better appreciation of the problems and technologies of the other. They will work together to create smoother overall operations by breaking the silos.

On Shyam's IIoT toolbox that he uses on a day-to-day basis

You have a solid track record of architecting IIoT applications in the Big Data space over the years. What tools do you use on a day-to-day basis?

In order to build Industrial Internet applications, GE's Predix is my preferred IIoT platform. It is built for Digital Industrial solutions, with security and compliance baked into it.
Customer IIoT solutions can be quickly built on Predix and extended with the services in the marketplace from the ecosystem. For asset health monitoring and for reducing downtime, Asset Performance Management (APM) can be used to get a jump start, and its extensibility framework can be used to extend it.

On how to begin one's journey into building Industry 4.0

For an IIoT architect, what would your recommended learning plan be? What aspects of architecting Industry 4.0 applications are tricky to master, and how does your book, Architecting the Industrial Internet, prepare its readers to be industry ready?

An IIoT architect can start with the book Architecting the Industrial Internet to get a good grasp of the area broadly. This book provides a diverse set of perspectives and architectural principles from authors who work in GE Digital, Oracle and Microsoft. End-to-end IIoT applications involve an understanding of sensors, machines, control systems, connectivity and cloud or server systems, along with an understanding of the associated enterprise data; the architect needs to focus on a limited solution or proof of concept first. The book provides coverage of the end-to-end requirements of IIoT solutions for architects, developers and business managers. The extensive set of use cases and case studies provides examples from many different industry domains to allow readers to easily relate to them. The book is written in a style that will not overwhelm the reader, yet explains the workings of the architecture and the solutions.

The book will be best suited for Enterprise Architects and Data Architects who are trying to understand how IIoT solutions differ from traditional IT solutions. The layer-by-layer description of the IIoT architecture will provide a systematic approach to help architects develop a deep understanding. IoT developers who have some understanding of this area can learn the IIoT platform-based approach to building solutions quickly.

On how to choose the best technology solution to optimize ROI

There are so many IIoT technologies that manufacturers are confused as to how to choose the best technology to obtain the best ROI. What would your advice to manufacturers be, in this regard?

Manufacturers and operations leaders look for quick solutions to known issues, in a proven way. Hence, they often do not have the appetite to experiment with a custom solution; rather, they like to know where the solution provider has solved similar problems and what the outcome was. The collection of use cases and case studies will help business leaders get an idea of the potential ROI while evaluating the solution.

Getting to know Predix, GE's IIoT platform, better

Let's talk a bit about Predix, GE's IIoT platform. What advantages does Predix offer developers and architects? Do you foresee any major improvements coming to Predix in the near future?

GE's Predix platform has a growing developer community that is approaching 40,000 strong. Likewise, the ecosystem of partners is approaching 1,000. Coupled with free access to create developer accounts on Predix.io, developers and architects can quickly learn from others and build working prototypes that can be used to get quick feedback from the business users. The catalog of microservices at Predix.io will continue to expand.
Likewise, applications written on top of Predix, such as APM and OPM (Operations Performance Management), will continue to become feature-rich, providing coverage of many common Digital Industrial challenges.

On the impact of other emerging technologies like AI on IIoT

What, according to you, will be the impact of AI and Deep Learning on IIoT?

AI and Deep Learning help to build robust Digital Twins of industrial assets. These Digital Twins will make the job of predictive maintenance and optimization much easier for the operators of these assets. Further, IIoT will benefit from many new advances in technologies like AI and AR/VR, and make the job of Field Service Technicians easier. IIoT is already widely used in energy generation and distribution, and in Intelligent Cities for law enforcement and to ease traffic congestion. The field of healthcare is evolving due to the increasing use of wearables. Finally, precision agriculture is enabled by IoT as well.

On likely barriers to IIoT adoption

What are the roadblocks you expect in the adoption of IIoT?

Today the challenges to rapid adoption of IoT are interoperability issues and a lack of understanding of all the security ramifications of the hyper-connected world. Finally, how to explain the business case of the IoT to the decision makers and different stakeholders is still evolving.

On why Architecting the Industrial Internet is a must-read for Architects

Would you like to give architects three reasons why they should pick up your book?

It is written by IIoT practitioners from large companies who are building solutions for both internal and external consumption. The book captures architectural best practices and advocates a platform-based approach to solutions. The theory is put into practice in the form of use cases and case studies, to provide a comprehensive guide for architects.

If you enjoyed this interview, do check out Shyam's latest book, Architecting the Industrial Internet.

PiWars - Mike Horne's world of Raspberry Pi Robotics

Fahad Siddiqui
09 Dec 2015
6 min read
Robotics competitions have evolved from the time I participated in them during my college days. Thanks to microboards such as the Raspberry Pi, it's much more accessible – it could quite literally be described as 'child's play'. Mike Horne, the organiser of PiWars and co-organiser of CamJam alongside his friend Tim Richardson, has taken his close connection to the Raspberry Pi project to inspire tech fans and hackers of all ages. PiWars is unique – it's not just about knocking over your combatant's robot, or following the terrain; it's about the entire learning and development process. I was lucky enough to get to talk to Mike about PiWars, robotics and the immense popularity of the Raspberry Pi.

What kick-started PiWars and CamJam?

CamJam started because I couldn't understand why there wasn't a Raspberry Jam in the Pi's home town. There had been a couple of Cambridge Jams but they stopped quite early. I resurrected it by starting small (with just 30 people in one room) and it's grown from there. Tim Richardson came on board as co-planner after my second Jam and encouraged me to get a larger venue where we could run workshops as well as talks. We now work hand-in-hand to make the events as good as possible.

PiWars was Tim's idea. We both fondly remember the television programme 'Robot Wars' and he wondered whether we couldn't do something similar, but with challenges instead of 'fights'. And it all went from there.

What sets PiWars apart from other robotics challenges? What is your vision for 2020?

What sets it apart first of all is that it is 'non-destructive'. Although we used the name PiWars, no robots are intentionally damaged. We believe this is key to the enjoyment of the competitors as it means their good work isn't destroyed. Apart from that, the use of the Raspberry Pi makes it unique – each robot must have a Pi at its core.

When was the last time you competed in a robotics challenge or created a robot?

I've personally never competed in a robotics challenge – the opportunities just haven't been there. I did actually go and see Robot Wars being filmed once, which was exciting! I created a robot about two weeks ago whilst preparing for the launch of the CamJam EduKit 3. It's a robotics kit that's available from The Pi Hut for £17 and contains everything you need to build a robot except batteries and a chassis (although the box it comes in makes a really good chassis!)

You guys did a great job in organising PiWars, CamJam and the Raspberry Pi birthday party. What are the challenges you faced, and the ideas you came up with?

Mostly the challenge is two-fold:

1. Persuading people to come and do talks, help with workshops and give general help on the day.
2. Logistics – it takes a lot of paperwork, spreadsheets and checklists to run an event on this scale.

It's always about working out what scale of event you want to run. CamJam is pretty steady now as we've got a structure. PiWars, now in its second year, has expanded and changed organically. For The Big Birthday Weekend we came up with the idea of having two lots of workshops running at the same time as two lots of talks. Ideas-wise, we use beer to get things kicked off. Tim's great at coming up with new ways to make the events better. The Marketplace area was his idea. Show-and-Tell was mine. It's a great collaboration.

Not everyone can participate or be physically present at such competitions. Do you think hosting a virtual competition over Skype could be possible?

We did consider it last year, actually!
Someone from Australia wanted to send his robot via freight and control it over the Internet. We didn't think that would work due to technical limitations. The main problem with holding a virtual competition is: where do you put the challenge courses? Do you have them in one location and then have robots remote-controlled, or do you have the competitors recreate the courses in their location somehow? Then, how do you deal with the video streaming to spectators?

How can robotics be taught in an effective manner with limited resources? How do you think Packt is contributing?

The main barrier to entry with robotics is not the cost of equipment, although that does play a part. The main barrier is a lack of material to support the learning. It's one of the things we've concentrated on with the EduKits – good, solid resource worksheets. Packt have been doing a great job by publishing several books which contain at least an element of robotics, and sometimes by devoting entire publications to the subject.

You may have seen some of our books mentioned in The MagPi, but do you use books to learn about the Raspberry Pi yourself?

I do. I've learned a lot of the basics from Adventures in Raspberry Pi (by Carrie Anne Philbin) and use Alex Bradbury and Ben Everard's Python book as a reference. I've also looked at several Packt publications for inspiration for Raspberry Pi projects.

Complete these sentences…

A robotics challenge is not about smashing, it is… about learning how to give your robot the skills it needs.

Pi Zero is… incredibly cute and brings a lot of hope for the future of embedded and IoT Raspberry Pi projects.

Code quality, build quality, aesthetics and blogging are not just there to rank the robot, they help to… focus the minds of the competitors on building the best robot they can.

My favourite Raspberry Pi project… at the moment is probably the S.H.I.E.L.D.-inspired 'den' I blogged about recently. Long-term, I really like Dave Akerman's work on getting pictures from near-space using high-altitude balloons with a Pi and camera module.

My words of wisdom for young hackers are… "Don't be limited by anything, not even your imagination. Push yourself to come up with new and interesting things to do, and don't be afraid to take someone else's idea and run with it."

This or That

Tea or coffee? Coffee.

Linux or Python? Both.

Geek Gurl or the Raspberry Pi Guy? They're both friends – I'm not landing myself in hot water for that one!

Terminators or Transformers? Transformers, but the ones from the 1980s, not Michael Bay's questionable version!

Raspberry Pi or BBC micro:bit? Raspberry Pi, all the way. The micro:bit just doesn't do enough to really stretch youngsters.

We're big fans of DIY tech at Packt – like Raspberry Pi, we're passionate about innovation and new ideas. That's why from Monday 7th to Sunday 13th December we're celebrating Maker Week. We're giving away free eBooks on some of the world's most exciting microcomputers – including Raspberry Pi and Arduino – and offering 50% off some of our latest guides to creative tech.

Why learn IBM SPSS Modeler in 2017

Amey Varangaonkar
03 Nov 2017
9 min read
IBM's SPSS Modeler provides a powerful, versatile workbench that allows you to build efficient and accurate predictive models in no time. What else separates IBM SPSS Modeler from other enterprise analytics tools out there today? To know just that, we talk to arguably two of the most popular members of the SPSS community.

Keith McCormick

Keith is a career-long practitioner of predictive analytics and data science, and has been engaged in statistical modeling, data mining, and mentoring others in this area for more than 20 years. He is also a consultant, an established author, and a speaker. Although his consulting work is not restricted to any one tool, his writing and speaking have made him particularly well known in the IBM SPSS Statistics and IBM SPSS Modeler communities.

Jesus Salcedo

Jesus is an independent statistical consultant and has been using SPSS products for over 20 years. With a Ph.D. in Psychometrics from Fordham University, he is a former SPSS Curriculum Team Lead and Senior Education Specialist, and has developed numerous SPSS learning courses and trained thousands of users.

In this interview with Packt, Keith and Jesus give us more insights on the Modeler as a tool, the different functionalities it offers, and how to get the most out of it for all your data mining and analytics needs.

Key Interview Takeaways

- IBM SPSS Modeler is easy to get started with but can be a tricky tool to master.
- Knowing your business, your dataset, and what algorithms you are going to apply are some key factors to consider before building your analytics solution with SPSS Modeler.
- SPSS Modeler's scripting language is Python, and the tool has support for running R code.
- IBM SPSS Modeler Essentials helps you effectively learn data mining and analytics, with a focus on working with data rather than on coding.

Full Interview

Predictive analytics has garnered a lot of attention of late, and adopting an analytics-based strategy has become the norm for many businesses. Why do you think this is the case?

Jesus: I think this is happening because everyone wants to make better-informed decisions. Additionally, predictive analytics brings the added benefit of discovering new relationships that you were previously not aware of.

Keith: That's true, but it's even more exciting when the models are deployed and are potentially driving automated decisions.

With over 40 years of combined experience in this field, you are master consultants and trainers, with unrivaled expertise when it comes to using the IBM SPSS products. Please share with us the story of your journey in this field. Our readers would also love to know what your day-to-day schedule looks like.

Jesus: When I was in college, I had no idea what I wanted to be. I took courses in many areas; however, I avoided statistics because I thought it would be a waste of time; after all, what else is there to learn other than calculating a mean and plugging it into fancy formulas (as a kid I loved baseball, so I was very familiar with how to calculate various baseball statistics)? Anyway, I took my first statistics course (where I learned SPSS) since it was a requirement, and I loved it. Soon after, I became a teaching assistant for more advanced statistics courses and I eventually earned my Ph.D. in Psychometrics, all the while doing statistical consulting on the side. After graduate school, my first job was as an education consultant for SPSS (where I met Keith).
I worked at SPSS (and later IBM) for seven years, at first focusing on training customers in statistics and data mining, and later on developing course materials for our training courses. In 2013, Keith invited me to join him as an IBM partner, so we both trained customers and developed a lot of new and exciting material in both book and video formats. Currently, I work as an independent statistical and data mining consultant, and my daily projects range from analyzing data for customers, to training customers so they can analyze their own data, to creating books and videos on statistics and data mining.

Keith: Our careers have lots of similarities. My current day-to-day is similar too. Lately, about a third of my year is lecturing and curriculum development for organizations like TDWI (Transforming Data with Intelligence), The Modeling Agency, and UC Irvine Extension. The majority of my work is in predictive analytics consulting. I especially enjoy projects where I'm brought in early and can help with strategy and planning. Then I coach and mentor the team until they are self-sufficient. Sometimes building the team is even more exciting than the first project, because I know that they will be able to do many more projects in the future.

There is a plethora of predictive analytics tools used today, for desktops and enterprises. IBM SPSS Modeler is one such tool. What advantages does SPSS Modeler have over the others, in your opinion?

Keith: One of our good friends, who co-authored the IBM SPSS Modeler Cookbook, made an interesting comment about this at a conference. He is unique in that he has done one-day seminars using several different software tools. As you know, it is difficult to present data mining in just one day. He said that only with Modeler is he able to spend some time on each of the CRISP-DM phases of a case study in a day. I think he feels this way because it's among the easiest options to use. We agree. While powerful, and while it takes a whole career to master everything, it is easy to get started.

Are there any prerequisites for using SPSS Modeler? How steep is the learning curve in order to start using the tool effectively?

Keith: Well, the first thing I want to mention is that there are no prerequisites for our Packt video IBM SPSS Modeler Essentials. In that, we assume that you are starting from scratch. For the tool in general, there aren't any specific prerequisites as such; however, knowing your data and what insights you are looking for always helps.

Jesus: Once you are back at the office, in order to be successful on a data mining project or to efficiently utilize the tool, you'll need to know your business, your data, and the modeling algorithm you are using.

Keith: The other question that we get all the time is how much statistics and machine learning you have to know. Our advice is to start with one or maybe two algorithms and learn them well. Try to stick to algorithms that you know. In our Packt course, we mostly focus on just decision trees, which are among the easiest to learn.

What do you think are the 3 key takeaways from your course, IBM SPSS Modeler Essentials?

The 3 key takeaways from this course, we feel, are:

1. Start slow. Don't pressure yourself to learn everything all at once. There are dozens of "nodes" in Modeler. We introduce the most important ones, so start there.
2. Be brilliant in the basics. Get comfortable with the software environment. We recommend the best ways to organize your work.
3. Don't rush to modeling.
Remember the Cross Industry Standard Process for Data Mining (CRISP-DM), which we cover in the video. Use it to make sure that you proceed systematically and don't skip critical steps.

IBM recently announced that SPSS Modeler would be available freely for educational usage. How can one make the most of this opportunity?

Jesus: A large portion of the work that we have done over the past few years has been to train people on how to analyze data. Professors are in a unique position to expose more students to data mining: we teach only those students whose work requires this type of training, whereas professors can expose a much larger group of people to data mining. IBM offers several programs that support professors, students, and faculty; for more information visit: https://www-01.ibm.com/software/analytics/spss/academic/

Keith: When seeking out a university class, whether it be classroom or online, ask them if they use Modeler or if they allow you to complete your homework assignments in Modeler. We recognize that R-based classes are very popular now, but you potentially won't learn as much about data mining. Sometimes too much of the class is spent on coding, so you learn R but learn less about analytics. You want to spend most of the class time actively working with data and producing results.

With the rise of open source languages such as R and Python and their applications in predictive analytics, how do you foresee enterprise tools like SPSS Modeler competing with them?

Keith: Perhaps surprisingly, we don't think Modeler competes with R or Python. A lot of folks don't know that Python is Modeler's scripting language. Now, that is an advanced feature, and we don't cover it in the Essentials video, but learning Python actually increases your knowledge of Modeler. And Modeler supports running R code right in a Modeler stream by using the R nodes. So Modeler power users (or future power users) should keep learning R on their to-do list. If you prefer not to use code, you can produce powerful results without learning either, by just using Modeler straight out of the box. So, it really is all up to you.

If this interview has sparked your interest in learning more about IBM SPSS Modeler, make sure you check out our video course IBM SPSS Modeler Essentials right away!

"Technology opens up so many doors" - An Interview with Sharon Kaur from School of Code

Packt
14 Feb 2018
5 min read
School of Code is a company on a mission to help more people benefit from technology. It has created an online multiplayer platform that aims to make coding fun, simple and accessible to all. This platform has been used by over 120,000 people since its launch in December 2016, and School of Code recently won the 'Transforming Lives' award at the 2017 Education Awards. The company was founded by Chris Meah while he was completing his PhD in Computer Science at the University of Birmingham.

As headline sponsor, Packt's founder and CEO Dave Maclean shared his thoughts on the programme. "The number and diversity of the applicants proves how many people in Birmingham are looking to learn key skills like HTML, CSS, Javascript and Node.JS. Packt is excited to sponsor School of Code's Bootcamp participants to increase the population of skilled developers in the West Midlands, which will have an impact on the growth of innovative start-ups in this region."

We spoke to Sharon Kaur, who's been involved with a School of Code bootcamp, about her experience and her perspective on tech in 2018.

Packt: Hi Sharon! Tell us a little about yourself.

Sharon Kaur: My name is Sharon. I am a choreographer and dancer for international music groups. I am also an engineering and technology advocate and STEM Ambassador for the UK and India – my main aim is getting more young girls and ethnic minorities interested in, and pursuing a career in, science, technology and engineering.

What were you doing before you enrolled for School of Code, and what made you want to sign up?

I studied for my BEng honours and MSc degrees at the University of Surrey, in general and medical engineering. I worked in the STEM education industry for a few years and then gained my teaching qualification in secondary school/sixth form Science in Birmingham. I recently started learning more about the technology industry after completing an online distance-learning course in cyber security. I was on Facebook one day in June and I saw an advert for the first ever School of Code Bootcamp, and I just decided to dive in and go for it!

Do you think there is a diversity issue in the tech sector? Has it affected you in any way?

I definitely think there is a major problem in the technology industry in terms of diversity. There are far too many leadership and management positions taken up by upper/middle-class white men. There needs to be more outreach work done to attract more women and ethnic minority people into this sector, as well as continuing to work with them afterwards, to prevent them from leaving tech in the middle of their careers! This has not affected me in any direct way, but as a female from an engineering background, which is also a very male-dominated sector, I have experienced some gender discrimination, and credit for work I produced being given to someone else.

Why do you think making technology accessible to all is important?

Technology opens up so many doors to some really exciting and life-fulfilling work. It really is the future of this planet, and in order to keep improving the progress of the global economy and human society, we need more and more advanced technology and methods, daily. This means that there is a dire need for a large number of highly competent employees working continuously in the tech sector.

What do you think the future looks like for people working in the tech industry? Will larger companies strive to diversify their workforce, and why should they?
In my opinion, the future looks extremely exciting and progressive! Technology will only become more and more futuristic, and we could be looking at getting more into the sci-fi age, come the next few centuries, give or take. So, the people who will work in the tech sector will be highly sought after – lucky them! I would hope, though, that large corporations will change their employee recruitment policies towards a more diverse intake, if they truly want to reach the top of their games, with maximum efficiency and employee wellbeing.

School of Code encourages the honing of soft skills through networking, teamwork and project management. Do you think these skills are vital for the future of the tech industry and for attracting a new generation, shaking off the stereotype that all coders are solitary beings? Why?

Yes, definitely – soft skills are just as important as, if not slightly more important than, the technical aptitude of an employee in the tech industry! With collaboration and business acumen, we can bring the world of technology together and use it to make a better life for every human being on this planet. The technology industry needs to show its solidarity, not its divisiveness, in attracting the next generation of young techies, if it wants to maintain its global outreach.

What advice would you give to someone who wanted to get into the tech sector but may be put off by the common preconception that it is made up of male, white privilege?

I would say go for it, dive in at the deep end and come out the other side the better person in the room! Have the courage to stand up for your beliefs and dreams, and don't ever let anyone tell you or make you feel like you don't deserve to be standing there with everyone else in the room – pick your battles wisely, become more industry- and people-savvy, choose your opportune moment to shine, and you'll see all the other techies begging you to work with them, not even for them!

Find out more about School of Code.

Download some of the books the Bootcampers found useful during the course:

- Thinking in HTML
- Thinking in CSS
- Thinking in JS series
- MEAN Web Development
- React and React Native
- Responsive Web Design

Has Machine Learning become more accessible?

Packt Editorial Staff
04 Sep 2017
9 min read
Sebastian Raschka is a machine learning expert. He is currently a researcher at Michigan State University, where he is working on computational biology. But he is also the author of Python Machine Learning, the most popular book ever published by Packt. It's a book that has helped to define the field, breaking it out of the purely theoretical and showing readers how machine learning algorithms can be applied to everyday problems. Python Machine Learning was published in 2015, but Sebastian is back with a brand new edition, updated and improved for 2017, working alongside his colleague Vahid Mirjalili. We were lucky enough to catch Sebastian in between his research and working on the new edition to ask him a few questions about what's new in the second edition of Python Machine Learning, and to get his assessment of what the key challenges and opportunities in data science are today.

What's the most interesting takeaway from your book?

Sebastian Raschka: In my opinion, the key takeaway from my book is that machine learning can be useful in almost every problem domain. I cover a lot of different subfields of machine learning in my book: classification, regression analysis, clustering, feature extraction, dimensionality reduction, and so forth. By providing hands-on examples for each one of those topics, my hope is that people can find inspiration for applying these fundamental techniques to drive their research or industrial applications. Also, using well-developed and maintained open source software makes machine learning very accessible to a broad audience of experienced programmers as well as people who are new to programming. And by introducing the basic mathematics behind machine learning, we can appreciate that machine learning is more than just black-box algorithms, giving readers an intuition of the capabilities but also the limitations of machine learning, and how to apply those algorithms wisely.

What's new in the second edition?

SR: As time and the software world moved on after the first edition was released in September 2015, we decided to replace the introduction to deep learning via Theano. No worries, we didn't remove it! But it got a substantial overhaul and is now based on TensorFlow, which has become a major player in my research toolbox since its open source release by Google in November 2015. Along with the new introduction to deep learning using TensorFlow, the biggest additions to this new edition are three brand new chapters focusing on deep learning applications: a more detailed overview of the TensorFlow mechanics, an introduction to convolutional neural networks for image classification, and an introduction to recurrent neural networks for natural language processing. Of course, and in a similar vein to the rest of the book, these new chapters not only provide readers with practical instructions and examples but also introduce the fundamental mathematics behind those concepts, which are an essential building block for understanding how deep learning works.

What do you think is the most exciting trend in data science and machine learning?

SR: One interesting trend in data science and machine learning is the development of libraries that make machine learning even more accessible. Popular examples include TPOT and AutoML/auto-sklearn. Or, in other words, libraries that further automate the building of machine learning pipelines.
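As a rough illustration of the kind of automation Sebastian is describing (this sketch is ours, not his), scikit-learn's GridSearchCV automates hyperparameter tuning for a single model; tools like TPOT and auto-sklearn push the same idea further by searching over entire pipelines:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Try each hyperparameter combination with 5-fold cross-validation
# and keep the best-scoring model.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5, None]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)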
While such tools do not aim to replace experts in the field, they may be able to make machine learning even more accessible to an even broader audience of non-programmers. However, being able to interpret the outcomes of predictive modeling tasks and being able to evaluate the results appropriately will always require a certain amount of knowledge. Thus, I see those tools not as replacements but rather as assistants for data scientists, to automate tedious tasks such as hyperparameter tuning.

Another interesting trend is the continued development of novel deep learning architectures and the large progress in deep learning research overall. We've seen many interesting ideas, from generative adversarial neural networks (GANs) to densely connected neural networks (DenseNets) and ladder networks. Great progress has been made in this field thanks to those new ideas and the continued improvements of deep learning libraries (and our computing infrastructure) that accelerate the implementation of research ideas and the development of these technologies in industrial applications.

How has the industry changed since you first started working?

SR: Over the years, I have noticed that more and more companies embrace open source, i.e., by sharing parts of their tool chain on GitHub, which is great. Also, data science and open source related conferences keep growing, which means more and more people are not only getting interested in data science but also considering working together, for example, as open source contributors in their free time, which is nice. Another thing I noticed is that as deep learning becomes more and more popular, there seems to be an urge to apply deep learning to problems even if it doesn't necessarily make sense -- i.e., the urge to use deep learning just for the sake of using deep learning. Overall, the positive thing is that people get excited about new and creative approaches to problem-solving, which can drive the field forward. Also, I noticed that more and more people from other domains are becoming familiar with the techniques used in statistical modeling (thanks to "data science") and machine learning. This is nice, since good communication in collaborations and teams is important, and a given common knowledge about the basics indeed makes this communication a bit easier.

What advice would you give to someone who wants to become a data scientist?

SR: I recommend starting with a practical, introductory book or course to get a brief overview of the field and the different techniques that exist. A selection of concrete examples would be beneficial for understanding the big picture and what data science and machine learning are capable of. Next, I would start a passion project while trying to apply the newly learned techniques from statistics and machine learning to address and answer interesting questions related to this project. While working on an exciting project, I think the practitioner will naturally become motivated to read through the more advanced material and improve their skills.

What are the biggest misunderstandings and misconceptions people have about machine learning today?

Well, there's this whole debate on AI turning evil. As far as I can tell, the fear-mongering is mostly driven by journalists who don't work in the field and are apparently looking for catchy headlines. Anyway, let me not iterate over this topic, as readers can find plenty of information (from both viewpoints) in the news and all over the internet.
To say it with one of the earlier comments on the topic, Andrew Ng's famous quote: "I don't work on preventing AI from turning evil for the same reason that I don't work on combating overpopulation on the planet Mars."

What's so great about Python? Why do you think it's used in data science and beyond?

SR: It is hard to tell which came first: Python becoming a popular language so that many people developed all the great open-source libraries for scientific computing, data science, and machine learning, or Python becoming so popular due to the availability of these open-source libraries. One thing is obvious, though: Python is a very versatile language that is easy to learn and easy to use. While most algorithms for scientific computing are not implemented in pure Python, Python is an excellent language for interacting with very efficient implementations in Fortran, C/C++, and other languages under the hood. This, calling code from computationally efficient low-level languages but also providing users with a very natural and intuitive programming interface, is probably one of the big reasons behind Python's rise to popularity as a lingua franca in the data science and machine learning community.

What tools, frameworks and libraries do you think people should be paying attention to?

There are many interesting libraries being developed for Python. As a data scientist or machine learning practitioner, I'd especially want to highlight the well-maintained tools from the Python core scientific stack:

- NumPy and SciPy as efficient libraries for working with data arrays and scientific computing
- Pandas to read in and manipulate data in a convenient data frame format
- matplotlib for data visualization (and seaborn for additional plotting capabilities and more specialized plots)
- scikit-learn for general machine learning

There are many, many more libraries that I find useful in my projects. For example, Dask is an excellent library for working with data frames that are too large to fit into memory and for parallelizing computations across multiple processors. Or take TensorFlow, Keras, and PyTorch, which are all excellent libraries for implementing deep learning models.

What does the future look like for Python?

In my opinion, Python's future looks very bright! For example, Python has just been ranked as the top programming language by IEEE Spectrum as of July 2017. While I mainly speak of Python from the data science/machine learning perspective, I have heard from many people in other domains that they appreciate Python as a versatile language and its rich ecosystem of libraries. Of course, Python may not be the best tool for every problem, but it is very well regarded as a "productive" language for programmers who want to "get things done." Also, while the availability of plenty of libraries is one of the strengths of Python, I must also highlight that most packages that have been developed are still being exceptionally well maintained, and new features and improvements to the core data science and machine learning libraries are being added on a daily basis. For instance, the NumPy project, which has been around since 2006, just received a $645,000 grant to further support its continued development as a core library for scientific computing in Python. At this point, I also want to thank all the developers of Python and its open source libraries that have made Python what it is today.
It's an immensely useful tool to me, and as a Python user, I also hope you will consider getting involved in open source -- every contribution is useful and appreciated, whether it's small documentation fixes, bug fixes in the code, new features, or entirely new libraries. Again, and with big thanks to the awesome community around it, I think Python's future looks very bright.

"Most of the problems of software come from complexity": An interview with Max Kanat-Alexander

Richard Gall
12 Oct 2017
13 min read
Max Kanat-Alexander understands software implicitly. He has spent his career not only working with it, but thinking about it too. But don't think for a moment that Max is an armchair philosopher. His philosophy has real-world consequences for all programmers, offering a way to write better code and achieve incredible results while remaining happy and healthy in an industry that can be exhausting. In his new book, Understanding Software, Max explores a wide range of issues that should matter to anyone working with software - we spoke to him about it, and discussed his thoughts on how thinking about software can help us all.

You're currently working at Google. Could you tell us what that's like and what you're working on at the moment?

Max Kanat-Alexander: I can't answer this question in a public setting without approval from Google PR. However, there is a public blog post that describes what I do and provides my title.

Why simplicity is so important in software

Your last book was called "Code Simplicity" – could you tell us exactly what that means and why you think it's so important today?

MKA: One of the things that I go over in that book, and that I cover also in my new book, is that most of the problems of software fundamentally come from code complexity. Even when it doesn't seem like they do, if you trace down most problems far enough, you'll see that they never would have happened if there hadn't been so much code complexity. This isn't obvious to everybody, though, so I wrote a book that provides a long reasoned argument that explains (a) the fundamental laws of software design and (b) hopefully brings the reader to an understanding of why simplicity (and maintaining that simplicity) is so important for software. I figured that one of the primary causes of complexity was simply the lack of full and complete understanding by every programmer of what complexity really is, where it comes from, why it's important, and what the simple steps are that you take to handle it. And even now, when I go back to the book myself, I'm always surprised that there are so many answers to the problems I'm facing now. When you've discovered the fundamental laws of software development, it turns out that they do in fact resolve problems even long after their discovery.

Do you think you can approach code the same way whatever languages or software you're using? Is there a philosophy you think any engineer can adopt? Or is flexibility key?

MKA: There are fundamental laws and principles that are true across any language. Fundamentally, a language is a way of representing a concept that the programmer has and wants to communicate to a computer. So there are ways of structuring concepts, and then there are ways of structuring things in a language. These both have rules, but the rules and principles of structuring concepts are the senior principles over the rules for structuring things in a language, because the rules for how you organize or handle a concept apply across any language, any computer, any set of tools, etc. Theoretically, there should be an ideal way to represent any particular set of concepts, but I'm not sure that any of our languages are there yet.

The philosophy I've expressed in Code Simplicity and now in Understanding Software is entirely a universal philosophy that applies to all software development. I don't generally write about something if you can only use it in one language or with one framework.
There is enough of that sort of writing out in the world, and while it's really valuable, I don't feel like it's the most valuable contribution that I personally have to bring to the world of software development. Since I have a background in some types of philosophy as well as a lot of experience doing design (and extensive refactoring) across many languages on many large projects, there's a universal viewpoint that I've tried to bring, and a lot of people have told me it has helped them.

To answer your last question, when you say the word "flexibility," you're in dangerous territory, because a lot of people interpret that as meaning that you should write endless generic code up front even if that doesn't address the immediate requirements of your system. This leads to a lot of code complexity, particularly in larger systems, so it's not a good idea. But some people interpret the word "flexibility" that way. For a more nuanced view, you kind of have to read all of Code Simplicity and then see some of the newer content in Understanding Software, too.

Are there any languages where code simplicity is particularly important? Or any in which it is challenging to keep things simple?

MKA: More than for a particular language, I would say it's important for the size and longevity of a project. Like, if I'm writing a one-off script that I'm only going to use for the next five minutes, I'll take a lot of shortcuts. But if I'm working on a multi-million line codebase that is used by millions of people, simplicity is of paramount importance.

There are definitely languages in which it's more challenging to keep things simple. I've made this statement before, and there have been a lot of amazing developments in the Perl community since I first made it, but it was a lot of work to keep the Bugzilla Project's Perl codebase healthy when I worked on it--more so than with other languages. Some of that had to do with the design of that particular codebase, but also that Perl allows you to accomplish the same thing in so many different ways. It led to a lot more thorough and time-consuming code reviews, where we would frequently have to tell people to do things in the way that our codebase does them, or the particular way that we'd learned was the most maintainable through hard-earned experience. It did still result in a well-designed codebase, and you can write well-designed Perl, but it required more language experience and more attention in code review than I've experienced when writing in other languages. Bash is similar in some ways--I once maintained a several-thousand-line Bash program, and that was pretty challenging in terms of simplicity.

Languages aren't equal--some are better than others for some types of tasks. There are tasks for which I love Perl and Bash, and others for which Java, Python, or Go are definitely more suitable. Some of this has to do with the suitability of a particular tool to a particular task, but more of it has to do with things like how much structure the language allows you to place around your code, how much consistency the language libraries have, etc.

What makes code bad?

What, in your opinion, makes code 'bad'? And how can we (as engineers and developers) identify it?

MKA: It's bad if it's hard to read, understand, or modify. That's the definition of "complex" for code. There's a lot to elaborate on there, but that's the basic idea.

What do you think the main causes of 'bad code' are?

Lack of understanding on the part of the programmer working on the system.
On your blog you talk a lot about how code is primarily 'human'. Could you elaborate on that and what it means for anyone who works in software?

MKA: Sure. It's people that write software, people that read it, and people that use it. The whole purpose of software is actually to help people, not to help computers or network cables. When you're looking at solving the problems of software development, you have to look at it as a human discipline--something that people do, something that has to do with the mind or the being or the perceptions of individuals. Even though that might sound a bit wishy-washy, it's absolutely true. If you think of it purely as being about the computer, you end up writing compilers and performance tools (which are great), but you don't solve the actual problems that programmers have. Because it's the programmers or the users who have problems--not the machines, for the most part. You have to talk to people and find out what's going on with them.

Just as one example, one of the best ways to detect complexity is to ask people for their negative emotional reactions to code. "What part of this codebase is the most frustrating?" "What part of this codebase are you the most afraid to touch because you think you'll break something?" Questions like that. Those are basically questions about people, even though you're getting data about code, because when you come down to it, the whole reason you're getting data about code has something to do with people. You can't lose sight of the fact that the problem you're resolving is code complexity, but at the same time, you also can't forget the reason you're resolving it, which is to help programmers.

How has your thinking around code changed over the years? Are there any experiences that stand out as important in shaping your philosophy today?

MKA: I've always been fairly meticulous in how I want to approach code. However, it's also important that that meticulousness delivers a product. There have been times, particularly early in my coding career, when I would go to clean up some codebase or improve something, only to find that nobody actually wanted the end result. That is, you can work on a project for a long time that nobody ends up using or caring about. So one of the lessons I learned, which is expressed in both Code Simplicity and Understanding Software, is that your software has to actually help somebody, or you're not going to end up being very happy with it, even if you enjoy the process of working on it.

Also, the more time I spend working with groups of engineers as opposed to working alone or in distributed teams, the more I learn about how to communicate the basic principles of software development to other engineers. I think that Code Simplicity did a pretty good job, but weirdly, sometimes you can be too simple for some people. That is, if you get too fundamental with your explanations, then sometimes people don't see how to use the information or how it applies to their life. Sometimes you have to get a bit more complex than the fundamentals and explain a bit more about how to use the data or how it might apply in some specific situation, because some programmers are very focused on their specific problem in present time. That is, they don't want to hear about how to solve all problems like the one they're having--they can't even listen to that or really digest it.
So instead you have to frame your explanation in terms of how it applies to this specific problem, which--when you're coming from the fundamental laws of software design all the way up to some extreme complexity that some programmer has boxed themselves into--can become quite difficult to communicate fully. It's also tough because they often think they already know the fundamentals, even though the problem they're having clearly indicates that they don't. So you have to learn how to communicate around those barriers.

Working on the Bugzilla Project was a significant influence on how I think about software development, because I proved that you really can refactor a legacy codebase that has gotten into a very bad or very complex state, and that doing so has significant effects on the product. I think that's one of the things that's been missing from other attempts and writings: there's a lot of talk about how code quality is important, but I (and the people who worked with me on Bugzilla) actually went through the process of making significant impacts on products and users over a period of many years by focusing almost purely on code quality and applying the basic principles of software design.

There have been lots of other experiences that have been an influence. I generally try to learn something from every encounter I have with programming or with software engineers. It's dangerous to think that you already know everything--then you stop learning.

3 things to take away from Understanding Software

What are the three things you want people to take away from Understanding Software?

MKA: Well, I hope that they take at least as many things away from the book as there are chapters in the book. But in general, I'd like people to have some good ideas about how to handle the problems they run into in terms of code complexity. I'd also like them to understand more about the fundamental reasons that we handle code complexity, as well as the rules around it. I'd like people to see the basic philosophy of software development as something that they can think with themselves, not just something that somebody else has said, and I'd like them to be able to use that basic philosophy to resolve the problems of their software development environments. And of course, I'd like people to come away from the book being much better and faster programmers than they were before.

3 ways engineers can be more productive

What 3 tips can you give to developers and engineers who want to be more productive?

MKA:

1. Understand as much as you can about software, the project you're working on, and so on. See the chapter "Why Programmers Suck" in Understanding Software for all the details on this and other things that you should learn about.

2. Try to take responsibility beyond the normal bounds of your code. That is, if you're normally only responsible for one tiny part of the system, also start thinking about how you could improve the parts of the system that talk to your part. Think about the consumers of your code and work to refactor them as well, when necessary (see the sketch after this list). Keep expanding your sphere of responsibility over your codebase over time, eventually even to the point of helping out with the tools and languages that you use to develop software.

3. Stop thinking so much about your code and start coding more. Or do drawings, or take a walk, or talk to somebody, or something. See the chapter "Stop Thinking: The Secret of Fast Programming" in Understanding Software for the full details on this.
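As a small illustration of tip 2 (our sketch, not an example from the book; the function names and the port-parsing scenario are invented), "refactoring the consumers of your code" might look like this in Python: you notice that every caller of your parsing function repeats the same validation, so you pull that responsibility into your own code and then clean up the caller as well:

```python
# Before: your function does the minimum, so a consumer elsewhere
# has to re-check your output.
def parse_port(raw: str) -> int:
    return int(raw)

def start_server(config: dict) -> None:
    port = parse_port(config["port"])
    if port < 1 or port > 65535:  # every caller repeats this check
        raise ValueError(f"bad port: {port}")
    ...

# After: you take responsibility for validation in your own code,
# then refactor the consumer too.
def parse_port(raw: str) -> int:
    """Parse and validate a TCP port, so callers don't have to."""
    port = int(raw)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def start_server(config: dict) -> None:
    port = parse_port(config["port"])  # the consumer gets simpler
    ...
```

The net effect is the one the tip describes: the boundary of what you consider "your" code grows, and the system as a whole gets simpler.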