
Author Posts

121 Articles

Kenneth Fukizi on the new Blazor framework, gRPC support, and other exciting features in ASP.NET Core 3.0

Vincy Davis
30 Dec 2019
9 min read
The open-source ASP.NET Core framework, developed by Microsoft and its community, is one of the most popular web frameworks. The modular framework runs on both the full .NET Framework, on Windows, and the cross-platform .NET Core. One of its most captivating features is its performance: it is not only faster than other web frameworks but is also perfectly suited to Docker containers. In November, Microsoft released the new version of ASP.NET Core with many exciting features. To understand the advances in ASP.NET Core more closely, we interviewed Kenneth Y. Fukizi, the author of the book 'Learn ASP.NET Core 3.0, Second Edition'.

Along with ASP.NET Core, Kenneth also shared his thoughts on the latest version of .NET Core, and the now available Entity Framework Core 3.0 and Entity Framework 6.3. He says that he is personally most excited about the new Blazor framework, as it allows him to avoid JavaScript. He also opines that Blazor will give developers a chance to specialize in Microsoft technologies. Kenneth adds that ASP.NET Core users should be looking forward to the high performance and scalability that comes with gRPC in this new release.

Kenneth's take on the features in ASP.NET Core 3.0

In your book, 'Learn ASP.NET Core 3.0, Second Edition', you say that 'Model View Controller' and 'Entity Framework Core 3' are the most widely used frameworks in ASP.NET Core. Can you elaborate on this? How do these frameworks enhance web application performance?

Pretty much every worthwhile application out there will need to have some interface for user interaction and persist some form of data. The Model View Controller is almost a default choice for many web application developers, mainly for its tried and tested versatility and its ability to separate concerns, allowing front-end developers to work on the views while back-end developers are busy with their part. This allows for fast application development.

When persisting data, Entity Framework Core 3.0 is often the choice for most application developers on ASP.NET Core, because it is a safer option than a horde of other ORM engines, since it is developed and maintained by Microsoft itself. Even though some time back it was found wanting in some areas compared to versatile ORMs like NHibernate, Entity Framework Core has addressed its gaps quickly and continues to grow effectively in the areas where it was behind. Its seamless integration with the rest of the framework makes it an easy choice for many application developers.

ASP.NET Core MVC supports asynchronous calls, thereby releasing unnecessarily held resources and in turn increasing application performance. Since everything is logically separated into groups (in other words, high cohesion and low coupling), it increases the application performance on the whole. If we talk about the advantages it gives to the application developer, there are many, and most of these allow for a developer to have code that does not repeat itself unnecessarily and that has low coupling, allowing everything to be tested individually. In general, it allows an application to have efficient code more easily and, in turn, makes the application more performant. (Read chapters 4 and 5 of my book for an understanding of all the basic concepts of ASP.NET Core 3.0.)

ASP.NET Core 3.0 has introduced a new framework called Blazor for building interactive client-side web UI with .NET. What are your thoughts on it?

This is definitely a game-changer. I'm personally excited that I have an option to use Blazor instead of JavaScript. It gives a chance for a developer to specialize in Microsoft technologies and be truly full-stack, all within the MS tech stack. For a typical back-end C# developer, once they are familiar with C# syntax, type safety, and the environment in general, it becomes easier to just work with one language across the stack, whether it is back end or front end. Client-side Blazor, with its Ahead Of Time (AOT) compilation directly to WebAssembly, will make client applications super fast, and it's something to definitely look out for. (Chapter 6 of my book gives a detailed explanation of Blazor.)

The latest release of ASP.NET Core also supports gRPC and ships with templates and tools for building gRPC services. What are the advantages and disadvantages that gRPC will offer to ASP.NET Core 3 users? Also, what are your thoughts on the gRPC built-in security features?

ASP.NET Core users should be looking forward to the high performance and scalability that comes with gRPC. With gRPC, a contract-first approach to API development is possible, and documentation for projects will become easier, making it easier for different teams on the same project to communicate better, with consistent API models as well. Reusability will be enhanced and encouraged by the gRPC templates. For applications using microservices, it will be advantageous in terms of security and performance to use gRPC mainly for communication. gRPC uses protocol buffers, as opposed to REST services, which need to expose data to users as JSON or XML over HTTP. The HTTP/2 protocol that gRPC uses is also more efficient than its counterpart, HTTP/1.1. With gRPC templates, we are able to have messages that flow in a bi-directional way, increasing efficiency, and we can cancel sent requests if the need arises.

There are indeed some disadvantages to using gRPC templates, including the fact that it now becomes a duty to maintain specification files, which are an integral part of the template. If you have different teams, you will have to agree on specifications before you even start to implement anything, and that can slow down application development. It is great to see that gRPC has visibly built-in considerations for TLS/SSL, and that it makes sure all its communications are first authenticated and encrypted. This goes a long way towards not only preventing attacks on application services but also deterring them.
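To make the contract-first idea concrete, here is a minimal sketch using gRPC's Python bindings (grpcio) rather than the ASP.NET Core templates the interview discusses. The .proto contract in the comment and the greeter_pb2 / greeter_pb2_grpc module names are illustrative assumptions (they would be generated by protoc), not code from Kenneth's book.

```python
# A minimal contract-first gRPC sketch in Python. The service contract
# lives in a .proto file, e.g.:
#
#   syntax = "proto3";
#   service Greeter { rpc SayHello (HelloRequest) returns (HelloReply); }
#   message HelloRequest { string name = 1; }
#   message HelloReply   { string message = 1; }
#
# Running `python -m grpc_tools.protoc` against that file produces the
# hypothetical greeter_pb2 / greeter_pb2_grpc modules imported below.
from concurrent import futures

import grpc
import greeter_pb2
import greeter_pb2_grpc


class Greeter(greeter_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        # The request/response types are dictated by the shared contract,
        # which is what keeps different teams' API models consistent.
        return greeter_pb2.HelloReply(message=f"Hello, {request.name}")


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    greeter_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    server.add_insecure_port("localhost:50051")  # use TLS credentials in production
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
```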
Have you had a chance to explore .NET Core 3.0, and the now available Entity Framework Core 3.0 and Entity Framework 6.3? What are you most excited about in this new release?

Yes, I have managed to explore .NET Core 3.0, and admittedly it was a great pleasure to do so. Some of the most exciting features I have seen are the introduction of support for Cosmos DB and C# 8, and that in itself makes a whole lot of development easier for global applications. LINQ has always been quite handy to me, and I am sure to many developers, but there have been scenarios where some queries have not been as efficient as I'd want them to be. To hear and see that it has been re-architected in many ways to produce more efficient queries is absolutely exciting, and I would love to uncover more of the query capabilities that are aimed to be improved upon in the future. Nullable reference types, introduced for both C# 8 and Entity Framework Core 3.0, will make my life simpler as a developer.
I'm also excited that Entity Framework 6.3 is the first version of Entity Framework 6 that is able to run on .NET Core; this will make it simpler to migrate older applications that were using Entity Framework 6 onto the .NET Core platform. (Chapter 9 of the book gives details about accessing data using Entity Framework Core 3.)

We also asked Kenneth about his preferred way of improving the speed and rendering performance of any web application.

Bundling combines multiple JavaScript or CSS files into a single file, while minification performs a variety of code optimizations on scripts and CSS, such as removing white space and commented code without altering any functionality, resulting in smaller payloads. When combined, they can improve the load-time performance of an application by reducing both the number of requests to the server and the size of the requested assets.

My personal way of doing things, when I need to use some functionality from a package, is first to look internally within the development framework. Only when an implementation of that functionality cannot be found, or when it is evidently deficient for the task at hand, do I go outside of the primary provider, Microsoft. Therefore, it is only natural that I prefer to use the out-of-the-box solution provided by both MVC and Razor Pages that makes use of bundleconfig.json. I have at times gone for the sophistication of Grunt and Webpack, but their being relatively more complex naturally makes the built-in functionality the first option, as long as it doesn't horribly fail.
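As a rough Python illustration of what a bundling-and-minification step does (a toy sketch only; real pipelines such as bundleconfig.json, Grunt, or Webpack are far more careful, and the file names below are made up):

```python
# Toy sketch of bundling + minification; not the bundleconfig.json tooling.
import re
from pathlib import Path

def bundle(paths):
    """Bundling: combine many files into one to cut the number of requests."""
    return "\n".join(Path(p).read_text() for p in paths)

def minify_css(css):
    """Minification: drop comments and redundant whitespace to shrink payloads."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # strip /* ... */ comments
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace
    return css.strip()

# Hypothetical usage with made-up file names:
# bundled = minify_css(bundle(["site.css", "theme.css"]))
```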
About the Book

'Learn ASP.NET Core 3.0, Second Edition' will help you become highly efficient in developing and maintaining powerful web applications. It will also guide you in deploying and monitoring your applications using Microsoft Azure, AWS, and Docker.

About the Author

Kenneth Y. Fukizi is a solutions architect, consultant, software developer and engineer with more than 14 years of professional experience. He is a Microsoft Certified Trainer®, Microsoft Certified Solutions Developer®, Microsoft Certified Solutions Associate®, and Microsoft Certified Professional®, among other professional and technical certifications. Kenneth also lectures and mentors computer science degree students in programming. He has spent most of his professional life working as a software engineering contractor/consultant on various projects for client organizations based in South Africa, Australia, the USA, and Canada.

Read more:
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
Inspecting APIs in ASP.NET Core [Tutorial]
An introduction to TypeScript types for ASP.NET Core [Tutorial]


Honeycomb CEO Charity Majors discusses observability and dealing with "the coming armageddon of complexity" [Interview]

Richard Gall
13 Mar 2019
16 min read
Transparency is underrated in the tech industry. But as software systems grow in complexity and their relationship with the real world becomes increasingly fraught, it nevertheless remains a value worth fighting for. To fight for it effectively, it's essential to remember that transparency is a technological issue, not just a communication one. Decisions about how software is built, and why it's built the way it is, lie at the heart of what it means to work in software engineering. Indeed, the industry is in trouble if we can't see just how important those questions are in relation to everything from system reliability to our collective mental health.

Observability, transparency, and humility

One term has recently emerged as a potential solution to these challenges: observability (or o11y, as it's known in the community). This is a word that has been around for some time, but it's starting to find real purchase in the infrastructure engineering world. There are many reasons for this, but a good deal of credit needs to go to observability platform Honeycomb and its CEO Charity Majors.

[Image: Charity Majors]

Majors has been a passionate advocate for observability for years. You might even say Honeycomb evolved from that passion and her genuine belief that there is a better way for software engineers to work. With a career history spanning Parse and Facebook (which acquired Parse in 2013), Majors is well placed to understand, diagnose, and solve the challenges the software industry faces in managing and maintaining complex distributed systems designed to work at scale.

"It's way easier to build a complex system than it is to run one or to understand one," she told me when I spoke to her in January. "We're unleashing all these poorly understood complex systems on the world, and later having to scramble to make sense of it."

Majors is talking primarily about her work as a systems engineer, but it's clear (to me at least) that this is true in lots of ways across tech, from the reliability of mobile apps to the accuracy of algorithms. And ultimately, impenetrable complexity can be damaging. Unreliable systems, after all, cost money.

The first step, Majors suggests, to counteracting the challenges of distributed systems is an acceptance of a certain degree of impotence. We need humility. She talks of "a shift from an era when you could feel like your systems were up and working to one where you have to be comfortable with the fact that it never is." While this can be "uncomfortable and unsettling for people in the beginning," in reality it's a positive step. It moves us towards a world where we build better software with better processes. And, most importantly, it cultivates more respect for people on all sides - engineers and users alike.

Charity Majors' (personal) history of observability

Observability is central to Charity Majors' and Honeycomb's purpose. But it isn't a straightforward concept, and it's also one that has drawn considerable debate in recent months. Ironically, although the term is all about clarity, it has been mired in confusion, the waters of its specific meaning more than a little muddied. "There are a lot of people in this space who are still invested in 'oh, observability is a generic synonym for telemetry,'" Majors complains.
However, she believes that "engineers are hungry for more technical terminology," because the feeling of having to deal with problems for which you are not equipped - quite literally - is not uncommon in today's industry. With all the debate around what observability is, and its importance to Honeycomb, Majors is keen to ensure its definition remains clear. "When Honeycomb started up… observability was around as a term, but it was just being used as a generic synonym for telemetry… when we started… the hardest thing was trying to think about how to talk about it... because we knew what we were doing was different," Majors explains.

Experimentation at Parse

The route to uncovering the very specific - but arguably more useful - definition of observability was through a period of sustained experimentation while at Parse. "Around the time we got acquired... I was coming to this horrifying realisation that we had built a system that was basically un-debuggable by some of the best engineers in the world." The key challenge for Parse was dealing with the scale of mobile applications. Parse customers would tell Majors and her team that the service was down for them, underlining the inability of Parse's monitoring tools to pick up these tiny pockets of failure ("Behold my wall of dashboards! They're all green, everything is fine!" Majors would tell them).

Scuba: the "butt-ugly" tool that formed the foundations of Honeycomb

The monitoring tools Parse was using at the time weren't that helpful because they couldn't deal with high-cardinality dimensions. Put simply, if you wanted to look at things on a granular, user-by-user basis, you just couldn't do it. "I tried everything out there… the one thing that helped us get a handle on this problem was this butt-ugly tool inside Facebook that was aggressively hostile to users and seemed very limited in its functionality, but did one thing really well… it let you slice and dice in real time on dimensions of arbitrarily high cardinality." Despite its shortcomings, this set it apart from other monitoring tools, which are "geared towards low cardinality dimensions," Majors explains.

[Image: More than just a quick fix (Credit: Charity Majors)]

So, when you're looking for "needles in a haystack," as Parse engineers often were, the level of cardinality is essential. "It was like night and day. It went from hours, days, or impossible, to seconds. Maybe a minute."
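To make "arbitrarily high cardinality" concrete, here is a toy Python sketch (an illustration only, not Scuba or Honeycomb): because raw events are kept rather than pre-aggregated, you can group by any field, even one with millions of distinct values such as a user ID.

```python
# Toy illustration of high-cardinality slicing: events stay raw, so any
# field can be a grouping dimension, however many distinct values it has.
from collections import defaultdict

events = [
    {"user_id": "u1", "endpoint": "/push", "latency_ms": 3200, "error": True},
    {"user_id": "u2", "endpoint": "/push", "latency_ms": 45, "error": False},
    {"user_id": "u1", "endpoint": "/query", "latency_ms": 2900, "error": True},
]

def slice_by(events, dimension):
    """Group raw events by an arbitrary dimension."""
    groups = defaultdict(list)
    for event in events:
        groups[event[dimension]].append(event)
    return groups

# "The site is down for me": find the one user whose requests are failing,
# something a wall of green, pre-aggregated dashboards can never show.
for user, evts in slice_by(events, "user_id").items():
    error_rate = sum(e["error"] for e in evts) / len(evts)
    print(user, f"error_rate={error_rate:.0%}")
```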
Observability: more than just a platform problem

This experience was significant for Majors and set the tone for Honeycomb. Her experience of working with Scuba became a frame for how she would approach all software problems. "It's not even just about, oh, the site is down, debug it; it's, like, how do I decide what to build?" It had, she says, "become core to how I experienced the world." Over the course of developing Honeycomb, it became clear to Majors that the problems the product was trying to address were actually deep: "a pure function of complexity."

"Modern infrastructure has become so ephemeral you may not even have servers, and all of our services are far flung and loosely coupled. Some of them are someone else's service," Majors says. "So I realise that everyone is running into this problem and they just don't have the language for it. All we have is the language of monitoring and metrics when... this is inherently a distributed systems problem, and the reason we can't fix them is because we don't have distributed systems tools."

Towards a definition of observability

Looking over my notes, I realised that we didn't actually talk that much about the definition of observability. At first I was annoyed, but in reality this is probably a good thing. Observability, I realised, is only important insofar as it produces real-world effects on how people work. From the tools they use to the way they work together, observability, like other tech terms such as DevOps, only really has value to the extent that it is applied and used by engineers.

[Image: It's not always easy to tell exactly what you're looking at (Credit: Charity Majors)]

"Every single term is overloaded in the data space - every term has been used - and I was reading the dictionary definition of the word 'observability' and... it's from control systems, and it's about how much you can understand and reason about the inner workings of these systems just by observing them from the outside. I was like, oh fuck, that's what we need to talk about!" In reality, then, observability is a pretty simple concept: how much can you understand and reason about the inner workings of a system just by observing it from the outside?

Read next: How Gremlin is making chaos engineering accessible [Interview]

But things, as you might expect, get complicated when you try to actually apply the concept. It isn't easy. Indeed, that's one of the reasons Majors is so passionate about Honeycomb.

Putting observability into practice

Although Majors is a passionate advocate for Honeycomb, and arguably one of its most valuable salespeople, she warns against the tendency for tooling to be viewed as a silver-bullet solution to problems. "A lot of people have been sold this magic spell idea, which is that you don't have to think about instrumentation or explaining your code back to yourself," Majors says. Erroneously, some people will think they "can just buy this tool for millions of dollars that will do it for you… it's like write code, buy tool, get magic… and it doesn't actually work, it never has and it never will." This means that while observability is undoubtedly a tooling issue, it's just as much a cultural issue too.

With this in mind, you definitely shouldn't make the mistake of viewing Honeycomb as magic. "It asks more of you up front," Majors says. "There is no magic. At no point in the future are you going to get to just write code and lob it over the wall for ops to deal with. Those days are over, and anyone who is telling you anything else is selling you some very expensive magic beans. The systems of the future do require more of developers. They ask you to care a little bit more up front, in terms of instrumentation and operability, but over the lifetime of your code you reap that investment back hundreds or thousands of times over. We're asking you, and helping you, make the changes you need to deal with the coming Armageddon of complexity."

Observability is important, but it's a means to an end: the end goal is to empower software engineers to practice software ownership. They need to own the full lifecycle of their code.

How transparency can improve accountability

Because Honeycomb demands more 'up front' from its users, it requires engineering teams to be transparent (with one another) and fully aligned.
Think of it this way: if there's no transparency about what's happening and why, and little accountability for making sure things do or don't happen inside your software, Honeycomb is going to be pretty impotent. We can only really get to this world when everyone starts to care properly about their code, and more specifically, how their code runs in production. "Code isn't even interesting on its own… code is interesting when users interact with it," Majors says. "It has to be in production."

That's all well and good (if a little idealistic), but Majors recognises there's another problem we still need to contend with. "We have a very underdeveloped set of tools and best practices for software ownership in production… we've leaned on ops to… be just this, like, repository of intuition… so you can't put a software engineer on call immediately and have them be productive…"

Observability as a force for developer well-being

This is obviously a problem that Honeycomb alone isn't going to fix. And while it's a problem the Honeycomb marketing team would love to fix, it's not just about Honeycomb's profits. It's also about people's well-being.

[Image: The Honeycomb team (Credit: Charity Majors)]

"You should want to have ownership. Ownership is empowering. Ownership gives you the power to fix the thing you know you need to fix and the power to do a good job… People who find ownership is something to be avoided - that's a terrible sign of a toxic culture."

The impact of this 'toxic culture' manifests itself in a number of ways. The first is the all-too-common issue of developer burnout. This happens because a working environment that doesn't actively promote code ownership and accountability leads to people having to work on code they don't understand. They might, for example, be working in production environments they haven't been adequately trained to work with. "You can't just ship your code and go home for the night and let ops deal with it," Majors asserts. "If you ship a change and it does something weird, the best person to find that problem is you. You understand your intent, you have all the context loaded in your head. It might take you 10 minutes to find a problem that would take anyone else hours and hours."

Superhero hackers

The second issue is one that many developers will recognise: the concept of the 'superhero hacker'.

Read next: Don't call us ninjas or rockstars, say developers

"I remember the days of, like… something isn't working, and we'd sit around just trying random things or guessing... it turns out that is incredibly inefficient. It leads to all these cultural distortions, like the superhero hacker who does the best guessing. When you have good tooling, you don't have to guess. You just look and see."

Majors continues on this idea: "The source of truth about your systems can't live in one guy's head. It has to live in a tool where everyone has access to the same information about the system, one single source of truth... Otherwise you're gonna have that one guy who can't go on vacation ever."

While a cynic might say, 'well, she would say that - it's a product pitch for Honeycomb', they'd ultimately be missing the point. This is undoubtedly a serious issue that's having a severe impact on our working lives. It leads directly to mental health problems and can even facilitate discrimination based on gender, race, age, and sexuality. At first glance, that might seem like a stretch.
But when you're not empowered - by the right tools and the right support - you quite literally have less power. That makes it much easier for you to be marginalized or discriminated against.

Complexity stops us from challenging the status quo

The problem really lies with complexity. Complexity has a habit of entrenching problems. It stops us from challenging the status quo by virtue of the fact that we simply don't know how to. This is something Majors takes aim at. In particular, she criticises "the incorrect application of complexity to the business problem it solves." She goes on to say that "when this happens, humans end up plugging the dikes with their thumbs in a continuous state of emergency. And that is terrible for us as humans."

How Honeycomb practices what it preaches

Majors' passion for what she believes in is evidenced in Honeycomb's ethos and values. It's an organization that is quite deliberately doing things differently, from both a technical and a cultural perspective.

[Image: Inside the Honeycomb HQ (Credit: Charity Majors)]

Majors tells me that when Honeycomb started, the intention was to build a team that didn't rely upon superstar engineers: "We made the very specific intention to not build a team of just super-senior expert engineers - we could have, they wanted to come work with us, but we wanted to hire some kids out of bootcamp, we wanted to hire a very well rounded team of lots of juniors and intermediates... This was a decision that I made for moral reasons, but I honestly didn't know if I believed that it would be better, full disclosure - I honestly didn't have full confidence that it would become the kind of high powered team that I felt so proud to work on earlier in my career. And yet... I am humbled to say this has been the most consistently high-performing engineering team that I have ever had the honor to work with. Because we empower them to collaborate and own the full lifecycle of their own code."

Breaking open the black boxes that sustain internal power structures

This kind of workplace, where "the team is the unit you care about," is one that creates a positive and empowering environment, which is a vital foundation for a product like Honeycomb. In fact, the relationship between the product and the way the team behind it works is almost mimetic, as if one reflects the other. Majors says that "we're baking" Honeycomb's organizational culture "into the product in interesting ways."

[Image: Teamwork (Credit: Charity Majors)]

She says that what's important isn't just the question of "how do we teach people to use Honeycomb, but how do we teach people to feel safe and understand their giant sprawling distributed systems. How do we help them feel oriented? How do we even help them feel a sense of safety and security?"

Honeycomb is, according to Majors, like an "outsourced brain." It's a product that means you no longer need to worry about information about your software being locked in a single person's brain, as that information should be available and accessible inside the product. This gives individuals safety and security, because it means that typical power structures, often based on experience or on being "the guy who's been there the longest," become weaker. Black boxes might be mysterious, but they're also pretty powerful.
With a product like Honeycomb, or, indeed, the principles of observability more broadly, that mystery begins to lift, and the black box becomes ineffective.

Honeycomb: building a better way of developing software, and developing together

In this context, Liz Fong-Jones' move to Honeycomb seems fitting. Fong-Jones (who you can find on Twitter @lizthegrey) was a Staff SRE at Google and a high-profile critic of the company over product ethics and discrimination. She announced her departure at the beginning of 2019 (in fact, Fong-Jones started at Honeycomb in the last week of February). By subsequently joining Honeycomb, she left an environment where power was being routinely exploited for one where the redistribution of power is at the very center of the product vision.

Honeycomb is clearly a product and a company that offers solutions to problems far more extensive and important than it initially thought it would. Perhaps we're now living in a world where the problems it's trying to tackle are more profound than they first appear. You certainly wouldn't want to bet against its success with Charity Majors at the helm.

Follow Charity Majors on Twitter: @mipsytipsy

Learn more about Honeycomb and observability at honeycomb.io. You can try Honeycomb for yourself with a free trial.


How two junior Intuit engineers helped their team adopt Kotlin within a month

Richard Gall
25 Nov 2019
6 min read
Change might feel like a natural part of working in the software industry. But in truth it's not natural at all; it takes a hell of a lot of effort to do things differently. That's what software developers Shelby Cohen and Katie Levy found out when they decided that Kotlin could be a better programming language option for their company's engineering team. As two relatively junior developers at financial software provider Intuit, Shelby and Katie didn't only have to take on the challenge of building out training programs and providing resources to thousands of developers across Intuit; they also had to negotiate internal hierarchies and politics that can prove resistant to change. To learn more about what this process was like, as well as why they're so passionate about Kotlin, I spoke to them over email.

Visit the Packt store to explore Kotlin eBooks and videos.

How did you get started in software engineering?

Shelby Cohen: I've always been interested in solving problems and engaging with new challenges. In high school I really enjoyed math, and one of my math teachers encouraged me to join the High School Robotics Club. It was all men and I didn't feel like I fit in. With the help and encouragement of my teacher, I helped start the Women Robotics Club at my high school. This is where I was exposed to programming for the first time, and that inspired me to study computer science in college. During my junior year, I learned about Intuit's co-op program at a hackathon and thought it was a great opportunity. I then travelled from New York to San Diego to spend a semester working at Intuit. I learned so much and was exposed to a lot of mentorship opportunities, which led to me accepting a full-time position as a Software Engineer I in 2017. Intuit really values teaching its employees and helping them continue to grow and develop, so it felt like a natural fit.

Why Kotlin? What's unique about Kotlin?

Katie Levy: Kotlin is an open source, cross-platform programming language, designed to interoperate with Java. What's nice about it is that it allows for an easy transition to a functional programming style while also being safer and more concise than Java code. Kotlin is unique because it is very clean, simple, and clear, and it removes a lot of the redundant, boilerplate code that's in Java, which allows the developer to focus on the business logic. It's my favorite language to program in, as I can write high-quality code faster by using its built-in language features. It can be used in any application running on the JVM, including Spring Boot backend services, Android apps, and even JavaScript applications.

How/where did you learn about it? Was it an immediate thing or did it take time for you to decide to do this?

Shelby: I volunteered at KotlinConf back in 2018, and it was such a great opportunity to meet a lot of the engineers and staff from JetBrains. I got to meet a senior executive at JetBrains and shared some of the projects I was working on at Intuit. He asked me to be a guest on his podcast, Talking Kotlin, and through this conference I got to know some of the most influential people in the Kotlin community.

How did people respond? Were they resistant to something new?

Katie: My team was definitely hesitant to start learning the language. One of the engineers on my team was especially resistant and tried to identify flaws in the language, using anything he could come up with as a reason why we shouldn't use Kotlin.
In those cases it's important to identify the real issue the engineers are having with the new language: is it the language itself, or is it something else? With that particular engineer, I found out that he was feeling like he didn't have the bandwidth to take on the work he was being assigned. To combat this, I created a training program for the team so we could learn together, build up the team's domain knowledge of the language, and make everyone's workload more visible.

How did you go about driving adoption?

Katie: Influencing and driving change is a hard project to scope. It can mean many different things to different people. For us, when we were starting out, we wanted to introduce 500 engineers to Kotlin, and we wanted 90% of them to start coding in Kotlin. We ended up exceeding our goal, reaching 370 engineers internal to Intuit and 4,329 external to Intuit. We want to improve the industry by encouraging engineers to develop in a more concise and less error-prone language. More recently, we were able to present on Kotlin to over 500 software engineers at LambdaWorld in Spain. Afterward, we had engineers wanting to take pictures with us, all saying they want to start using the language. We found that speaking at conferences helped us meet our goals and scale our efforts.

What were the challenges?

Shelby: In addition to some of the initial resistance, one of the biggest challenges was the internal hierarchy. When I introduced Kotlin to my team, a lot of the engineers were more senior than me, and at the time I wasn't confident about the value that I was bringing to the team. I implemented group code reviews, sent out resources, and walked the team through examples as they were learning Kotlin. Once the team had a good foundation in Kotlin, I implemented a flatter teaching structure and encouraged everyone to learn and teach each other a specific part of the language. This was really effective, because everyone learns from different teaching styles, and team members felt more empowered in their work.

Have you learned anything else about Kotlin throughout the process? And about engineering in general?

Shelby: One of the most valuable things I learned from this experience is: don't be afraid to ask for advice on how to influence at scale, and connect with others who have driven change at your company. This way you can learn from others' experiences. Katie and I reached out to a lot of leaders making an impact in their areas of expertise to learn from them and get feedback and advice on our journey. This is something we will continue to do as we keep learning and facing other challenging problems.

Thanks to Shelby and Katie for talking to us - it's clear they have a lot of passion for Kotlin, but more importantly they also have a great sense of how to engage and support other developers.

Follow Shelby on Twitter: @shelbyc0hen

Follow Katie on Twitter: @klevy110


How Gremlin is making chaos engineering accessible [Interview]

Richard Gall
14 Jun 2018
10 min read
Despite considerable hype, chaos engineering doesn't appear to have completely captured the imagination of the wider software engineering world yet. According to this year's Skill Up survey, only 13% of developers said they were excited about it when asked. But that doesn't mean we should disregard it - far from it. Like many of the best trends, it might blow up when we least expect it, and it might catch your CTO's eye in just a few months. As site reliability engineering grows as a discipline, and as businesses start to put a value on downtime, chaos engineering is likely to become a big part of the reliability and resilience toolkit.

Gremlin, chaos engineering, and the end of the age of downtime

"People are expected to always be up," says Matt Fornaciari, co-founder and CTO of Gremlin, a product that offers "failure as a service" to businesses. I spoke to Fornaciari last month to get a deeper insight into Gremlin and the team and ideas behind it. He believes the world has changed in recent years, and that the days of service windows, when sites would just be taken down for an hour or two for an update or change, are over: "that's unacceptable to people these days."

Fornaciari isn't an unbiased observer, of course. The success of Gremlin depends on chaos engineering's adoption and acceptance. However, he's not going out on a limb; there's clear VC interest in Gremlin. At the end of 2017 the company received its first round of funding - more than 7 million USD. It's a cliche, but money does talk - and in this instance it seems to be saying that this approach might change the way we think about building our software.

Arguably, chaos engineering - and by extension Gremlin - is a response to other trends in software. "I've seen a lot of signals that this is the way the world's going," Fornaciari says. He's referring here to broader trends like cloud and microservices. He explains that because microservices is all about modularity, and about breaking aspects of your software infrastructure into smaller pieces, "you end up with nodes in this network," which "adds network complexity." Consequently, this additional complexity means there is more that can go wrong - the system becomes less reliable.

Gremlin's bid to democratize chaos engineering

It's important to note here that chaos engineering has been around for some time - it's not a radically new methodology. But it has largely been locked away in some of the world's biggest tech companies, like Netflix and Amazon. Many of Gremlin's leaders actually worked at those companies - Fornaciari has worked at Salesforce and Amazon, for example. "The main goal was to democratize chaos engineering… we've [the Gremlin team] done it at the bigger companies and we're like, you know what, everyone can benefit from this."

That is the essential point about chaos engineering. If it's going to catch on in the mainstream tech world, it needs to be more accessible to different businesses. Fornaciari explains that many of Gremlin's customers are larger organizations. These are companies for whom uptime is of utmost importance, where a site outage that lasts just an hour could cost thousands of dollars. That said, from a cultural perspective, many organizations find it difficult to adopt this sort of mindset. "Proving the value of something that doesn't happen," Fornaciari says, is one of the biggest challenges for Gremlin. This is particularly true when selling their tool.
Pager pain: how Gremlin sells chaos engineering to customers

This is how Gremlin does it: "We have three qualifying questions: do you measure your downtime? Do you have somebody who's responsible for downtime? And do you actually have a dollar amount tied to it?" Presumably, for many organizations, at least one answer to these questions is "no". That's why customer support is so important for Gremlin. "Customer success and developer advocacy are two of our biggest initiatives… I've told people as we're recruiting them that half of our goal as a company is to educate people."

Gremlin's challenges as a product and as a business reflect the wider difficulties of managing upwards. The tension between those 'on the ground' and those at a more senior, managerial level is one that Gremlin is acutely aware of. This is where a lot of the push-back comes from, Fornaciari explains:

"What we've seen so far is just push-back from the top down - like, why do we need this? We use the term 'pager pain' to describe the engineer on call - the closer you are to the ground, the closer you are to the on-call rotation, and the more you feel those pains and the more you believe in this. But as you rise up a couple of levels you maybe don't feel that as much… if you don't have that measure on uptime - unless someone is on the hook for that at a higher level - there's oftentimes a 'why do we need this, why are we going to spend money on breaking things?'"

Pager pain is a nice concept - it captures the tension between different layers of management. It highlights the conflict between 'what do we need?' and 'what can we do?'

Read next: Blockchain can solve tech's trust issues

Safety, simplicity and security

To successfully sell Gremlin, the way the product is designed is everything. For that reason, the Gremlin team have three tenets built into their product: safety, security, and simplicity. When you've got a "potentially dangerous tool," as Fornaciari himself describes it, making sure things are safe and secure is absolutely essential.

Arguably, the fact that chaos engineering is so hard to do well might be something Gremlin can use to its advantage. "One thing we hear when we talk to companies about it is 'well, we'll go build this ourselves', and the fact is it's a really hard thing to do, and a hard thing to do well." Gremlin is walking a bit of a tightrope. On the one hand chaos engineering is for everyone, but on the other it's difficult and dangerous. It should be accessible, but not too accessible. "One of the reasons we don't have a free offering is because we are a little worried about protecting our customers and not doing any harm to people… I mean, this is essentially giving somebody a potentially dangerous tool... if they're not given the proper education then that could be a problem, right?"

Gremlin isn't the only chaos engineering product out there. As with any trend, there are plenty of software platforms and tools emerging for technologically forward-thinking businesses. Fornaciari doesn't see these as a threat - he's confident, bullish even, about Gremlin's place in the market. "There are a lot of tools out there that people can go and use, but they really lack the safety and simplicity." Alongside its philosophy of safety, security and simplicity, a big selling point, according to Fornaciari, is the experience and expertise built into Gremlin's DNA. "We've got fifteen years of combined expertise in this space," he says.
"Being the experts on it, and having built it three or four times already in different big companies, sort of gave us this leg up to go out there in the world." But while Fornaciari is eager to assert Gremlin's knowledge, there's no trace of elitism - sharing knowledge is a core part of the product offering. "We actually built out customer success tooling, so if particular attacks fail for them we can proactively reach out and be like, 'hey, we saw you were trying to do this, maybe you meant to do this,'" Fornaciari explains.

Controlled chaos: chaos engineering and the scientific method

Control is central to Gremlin's philosophy - it's a combination of the team's commitment to safety, security and simplicity. In fact, it's this element of control that distinguishes chaos engineering today from what went before. Central to Gremlin's mission to make chaos engineering accessible is also redefining how it's done. "If you're familiar with the Netflix Chaos Monkey mentality of randomly terminating services, well, that's a good start, but safety is really lacking. We talk more about this controlled chaos… this idea that you start fairly small with this small blast radius, and then as you become more confident you grow it out and grow it out, as opposed to just, like, 'cool, let's just chuck a grenade in here and see what happens.'" Fornaciari goes on to describe this 'controlled chaos' in a surprising way: "It's much more like the scientific method, actually. Applying that method to your infrastructure and your reliability in general." This approach is essential if you're going to do chaos engineering well.

How to do chaos engineering effectively

When I ask Fornaciari how engineering teams and businesses can do chaos engineering well, he emphasizes the importance of starting with a hypothesis: "You need to have a hypothesis that you're trying to prove. Throwing random chaos at something is fine - it'll surface some of the unknown unknowns for you. But really having a hypothesis that you're trying to prove is the best way to get value out of this [chaos engineering]."

If you're going to take a scientific approach to testing your infrastructure using 'chaos experiments', managing scale is also incredibly important. Don't run before you can walk is the message. "Keep it very small initially, then you start to grow the blast radius. You definitely want to make sure that you're starting off with the smallest modicum that you can." Given the potential dangers of throwing metaphorical gremlins into your system, starting where you're comfortable makes a lot of sense. "Start in staging, start where you're comfortable, build your confidence. Make sure your system behaves well in front of non-customer-facing traffic before you go out to the world." That said, Gremlin has had "some pretty bold customers" who go straight ahead and start running chaos experiments in production. "That was cool. It's a little scary, but they were confident, and they've been using Gremlin as part of their system ever since."
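As a rough illustration of that hypothesis-first, small-blast-radius loop, here is a minimal Python sketch (a made-up experiment, not the Gremlin product or its API): inject latency and failures into one stand-in dependency and check that the service degrades gracefully.

```python
# Toy chaos experiment: state a hypothesis, keep the blast radius to a
# single dependency, inject a fault, and verify the hypothesis.
import random
import time

def flaky_dependency(latency_s=0.0, failure_rate=0.0):
    """Stand-in for a downstream service with injected latency/failures."""
    time.sleep(latency_s)
    if random.random() < failure_rate:
        raise TimeoutError("injected failure")
    return "ok"

def handle_request():
    # Hypothesis: when the dependency is slow or failing, we degrade
    # gracefully instead of surfacing an error to the user.
    try:
        return flaky_dependency(latency_s=0.05, failure_rate=0.2)
    except TimeoutError:
        return "fallback"  # cached/default response

# Small blast radius first: 100 synthetic requests in staging, not prod.
results = [handle_request() for _ in range(100)]
fallbacks = results.count("fallback")
print(f"degraded gracefully on {fallbacks} of {len(results)} requests")
```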
Chaos engineering requires confidence and control

Ultimately, if chaos engineering is going to take off - as Fornaciari believes it will - engineers will need to be incredibly confident. That's true on a number of levels. You need confidence that you'll be able to handle a range of experiments and deploy them wisely. But you'll also need confidence that you can manage the expectations of those in senior management. It's not hard to see the value of chaos engineering. As Fornaciari says, "if you prevent one outage one time, you've saved that money to pay for the tool to make sure it doesn't happen again." But it might be hard to find time for it. It might be hard to get buy-in and investment in the tools you need to do it. Gremlin is certainly going to play an important part in helping engineers do that. But one of its biggest challenges - and perhaps one of its most noble missions too - is transforming a culture where people don't really appreciate 'pager pain'. If Fornaciari and Gremlin can help solve that, good luck to them.

You can follow Matt Fornaciari on Twitter: @callmeforni


Blockchain can solve tech's trust issues - Imran Bashir

Richard Gall
05 Jun 2018
4 min read
The hype around blockchain has now reached fever pitch. With the Bitcoin bubble having all but burst, the tech world - and beyond - is starting to think more creatively about how blockchain can be applied. We've started to see blockchain being applied in a huge range of areas, and that's likely to grow over the next year or so. We certainly weren't surprised to see blockchain rated highly by many developers working in a variety of fields in this year's Skill Up survey. Around 70% of all respondents believe that blockchain is going to prove revolutionary.

Read the Skill Up report in full. Sign up to our weekly newsletter and download the PDF for free.

To help us make sense of the global enthusiasm and hype for blockchain, we spoke to blockchain expert Imran Bashir. Imran is the author of Mastering Blockchain, so we thought he could offer some useful insights into where blockchain is going next. He didn't disappoint.

Respondents to the Skill Up survey said that blockchain would be revolutionary. Do you agree? Why?

I agree. The fundamental issue that blockchain solves is that of trust. It enables two or more mutually distrusting parties to transact with each other without the need to establish trust or rely on a trusted third party. This phenomenon alone is enough to start a revolution. Generally, we perform transactions in a centralised and trusted environment, which is the norm and works reasonably well, but think about a system where you do not need trust or a central trusted third party to do business. This paradigm fundamentally changes the way we conduct business and results in significant improvements such as cost saving, security, and transparency.
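A toy Python sketch of the mechanism behind that tamper-evidence (a simplified illustration, not a real blockchain: there is no consensus, network, or mining here): each block commits to the hash of the previous one, so rewriting history is detectable by anyone holding the chain.

```python
# Toy hash chain: each block stores the previous block's hash, so a
# silent rewrite of history breaks every later link in the chain.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64  # genesis marker
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
assert verify(chain)

chain[0]["data"] = "Alice pays Bob 500"  # tampering is immediately detectable
assert not verify(chain)
```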
Why should developers learn blockchain? Do you think blockchain technology is something the average developer should be learning? Why?

Any developer should learn blockchain technology, because in the next year or so there will be high demand for skilled blockchain developers/engineers. Even now there are many unfilled jobs; it is said that there are 14 jobs open for every blockchain developer. The future will be built on blockchain; every developer/technologist should strive to learn it.

What most excites you about blockchain technology?

It is the concept of decentralisation and its application in almost every industry, ranging from finance and government to medicine and law. We will see applications of this technology everywhere. It will change our lives, just the way the Internet did in the 1990s. Also, smart contracts constitute a significant part of blockchain technology, and they allow you to implement contracts that are automatically executable and enforceable. This ability of blockchain allows you to drastically reduce the amount of time it takes for contract enforcement, and it eliminates the need for third parties and manual processes that can take a long time to come into action. Enforcement in the real world takes a long time; in the blockchain world, it is reduced to a few minutes, if not seconds, depending on the application and requirements.

What tools do you think are essential to master in order to take advantage of blockchain?

Currently, I think there are some options available. Blockchain platforms such as Ethereum and Hyperledger Fabric are the most commonly used for development. As such, developers should focus on at least one of these platforms. It is best to start with the necessary tools and features available in a blockchain, and once you have mastered the concepts, you can move on to using frameworks and APIs, which will ease the development and deployment of decentralised applications.

What do you think will be the most important thing for developers to learn in the next 12 months?

Learn blockchain technology and at least one related platform. Also explore how to implement business solutions using blockchain, which brings about benefits such as security, cost saving, and transparency.

Thanks for taking the time to talk to us, Imran! You can find Imran's book on the Packt store.


Python experts talk Python on Twitter: Q&A Recap

Richard Gall
29 Mar 2018
3 min read
To celebrate the launch of Python Interviews, we ran a Q&A session on Twitter with some of the key contributors to the book. Author and interviewer Mike Driscoll (@driscollis) and experienced Python contributors Steve Holden (@holdenweb) and Alex Martelli (@aleaxit) got together to respond to your questions. Here's what happened...

https://twitter.com/PacktPub/status/979055321959358465
https://twitter.com/aleaxit/status/979055993874104321
https://twitter.com/holdenweb/status/979056136199593984
https://twitter.com/driscollis/status/979056963987361793

The future of Python

We asked Mike, Steve and Alex what they thought the future of Python is going to look like.

https://twitter.com/aleaxit/status/979057847660003328
https://twitter.com/holdenweb/status/979059669699309569
https://twitter.com/holdenweb/status/979059813459034112
https://twitter.com/driscollis/status/979059276017815554

How to get involved with the Python community

We then asked what our experts think is the best way for someone new to the Python community to get involved. With the language growing at an immense rate, more people are (hopefully) going to want to contribute to the project.

https://twitter.com/aleaxit/status/979059707389231105
https://twitter.com/holdenweb/status/979060708276137985

Advice for anyone new to programming

Programming's popularity as a career choice is growing. That's not just true of new graduates, but also of people looking to retrain and take on a new challenge in their career. But what should anyone new to programming know when starting out?

https://twitter.com/aleaxit/status/979063034202107905
https://twitter.com/holdenweb/status/979061878554054658
https://twitter.com/driscollis/status/979061529575346177

Switching from Python 2.7 to Python 3

There's been considerable discussion within the community on the merits of shifting from Python 2.7 to Python 3. But whatever the obvious advantages, there will always be resistance to change when it requires an investment of time and effort. And if you don't need to switch, then why would you? Here's what Mike, Steve and Alex had to say...

https://twitter.com/aleaxit/status/979063346665107457
https://twitter.com/holdenweb/status/979062974450192384
https://twitter.com/driscollis/status/979062547935571969

What gives Python an advantage over other programming languages?

Why is Python so popular, exactly? If it's growing at such a fantastic rate, why are developers and engineers turning to it? What does it have that other languages don't?

https://twitter.com/aleaxit/status/979063792276471808
https://twitter.com/holdenweb/status/979064210608001025
https://twitter.com/driscollis/status/979063896173699072

Future Python releases

If Python is going to remain popular, it will need to adapt and evolve with the needs of the developers of the future. So what capabilities and features would our experts like to see from Python going forward?

https://twitter.com/driscollis/status/979064329864695813
https://twitter.com/aleaxit/status/979064880757063680
https://twitter.com/holdenweb/status/979064474496913408

What problems does Python face as a language?

https://twitter.com/driscollis/status/979065953949552640
https://twitter.com/aleaxit/status/979065864539357184
https://twitter.com/holdenweb/status/979066065706725376

Why is Python so useful for AI and machine learning?

AI is a growing area that has expanded beyond the confines of data science into just about every corner of modern software engineering.
Python has been a core part of this, and that partly explains the rise of Python's popularity - people want to build algorithms in a way that's relatively straightforward.

https://twitter.com/driscollis/status/979066778771914752
https://twitter.com/holdenweb/status/979069094862389253
https://twitter.com/holdenweb/status/979069100831006721

“Deep meta reinforcement learning will be the future of AI where we will be so close to achieving artificial general intelligence (AGI)”, Sudharsan Ravichandiran

Sunith Shetty
13 Sep 2018
9 min read
A McKinsey report predicts that artificial intelligence techniques, including deep learning and reinforcement learning, have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. Reinforcement learning (RL) is an increasingly popular technique for enterprises that deal with large, complex problem spaces. It enables agents to learn from their own actions and experiences: working in an interactive environment, they use a trial-and-error process to find the best optimized result. Reinforcement learning is at the cutting edge right now, and it has finally reached a point where it can be applied to real-world industrial systems.

We recently interviewed Sudharsan Ravichandiran, a data scientist at param.ai and the author of the book Hands-On Reinforcement Learning with Python. Sudharsan takes us on an insightful journey, explaining why reinforcement learning is trending and becoming so popular lately. He talks about the positive contributions of RL in various fields such as the gaming industry, robotics, inventory management, manufacturing, and finance.

Author's Bio

Sudharsan Ravichandiran, author of the book Hands-On Reinforcement Learning with Python, is a data scientist, researcher, and YouTuber. His area of research focuses on practical implementations of deep learning and reinforcement learning, including natural language processing and computer vision. He used to be a freelance web developer and designer and has designed award-winning websites. He is an open source contributor and loves answering questions on Stack Overflow. You can follow his open source contributions on GitHub.

Key Takeaways

Reinforcement learning adoption in the community has increased exponentially because of the augmentation of reinforcement learning with state-of-the-art deep learning algorithms. It is extensively used in the gaming industry, robotics, inventory management, and finance, and you can see more and more research papers and applications leading to full-fledged self-learning agents.

One of the common challenges faced in RL is safe exploration. To avoid this problem, one can use imitation learning (learning from human demonstration) to provide the best optimized solution.

Deep meta reinforcement learning will be the future of artificial intelligence, where we will implement artificial general intelligence (AGI) to build a single model that masters a wide variety of tasks; each model will be capable of performing a wide range of complex tasks.

Sudharsan suggests that readers learn to code the algorithms from scratch instead of using libraries. It will help them understand and implement complex concepts in their research work or projects far better.

Full Interview

Reinforcement learning is at the cutting edge right now, with many of the world's best researchers working on improving the core algorithms. What do you think is the reason behind RL's success, and why is RL getting so popular lately?

Reinforcement learning has been around for many years; the reason it is so popular right now is that it is possible to augment reinforcement learning with state-of-the-art deep learning algorithms. With deep reinforcement learning, researchers have obtained better results. Specifically, reinforcement learning started to grow on a massive scale after the reinforcement learning agent AlphaGo beat the world champion at the board game Go.
Also, deep reinforcement learning algorithms help us take a step closer to artificial general intelligence, which is the true AI.

Reinforcement learning is a pretty complex topic to wrap your head around. What got you into the RL field? What keeps you motivated to keep working on these complex research problems?

I used to be a freelance web developer during my university days. I had a paper called Artificial Intelligence in my spring semester; it really got me intrigued and made me want to explore the field further. Later on, I got invited to a Microsoft data science conference, where I met many experts and got to know the field much better. All of this intrigued me and made me venture into AI.

The one thing that motivates and keeps me excited is the pace of advancement in reinforcement learning lately. DeepMind and OpenAI are doing a great job and massively contributing to the RL community. Recent advancements such as human-like robot hand control that manipulates physical objects with unprecedented dexterity, imagination-augmented agents which can imagine and make decisions, and world models where the agents have the ability to dream all excite me and keep me going.

Can you please list three popular problem areas where RL is majorly used? Also, what are the three most pressing challenges faced while implementing RL in real life? As a developer/researcher, how are you gearing up to solve them?

RL is predominantly used in the gaming industry, robotics, and inventory management. There are several challenges in reinforcement learning, for instance, safe exploration. Reinforcement learning is basically a trial-and-error process where agents try several actions to find the best and optimal action. Consider an agent learning to navigate or learning to drive a car. Agents don't know which action is better unless they try it. The agent also has to be careful not to select actions that are harmful to others or itself, say, for example, colliding with other vehicles. To avoid this problem, we can use imitation learning, or learning from a human demonstration, where the agents learn directly from a human supervisor. Apart from these, there are various evolutionary strategies used to solve the challenges faced in RL.
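The trial-and-error loop Sudharsan describes, and the exploration/exploitation trade-off behind the safe-exploration problem, can be made concrete in a few lines of Python. The following is a minimal tabular Q-learning sketch with epsilon-greedy exploration; the three-state toy environment is invented for illustration and is not code from the book or the interview.

```python
import random

# Toy environment: 3 states in a row, 2 actions (0 = stay, 1 = move right).
# Reaching the last state pays a reward of 1. Entirely hypothetical.
N_STATES, N_ACTIONS = 3, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    next_state = min(state + action, N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, but keep
        # exploring with probability epsilon -- the crude version of the
        # exploration problem discussed above.
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Q-learning update: move the estimate towards the observed reward
        # plus the discounted value of the best next action.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)  # action 1 should end up preferred in states 0 and 1
```

In a safety-critical setting, the imitation learning Sudharsan mentions would replace the random exploration step with actions drawn from human demonstrations, so the agent never has to try dangerous actions blindly.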
There have been some positive developments in RL from the OpenAI and DeepMind teams that have been widely adopted, both in research and in real-world applications. What are some cutting-edge techniques you foresee getting public attention in RL in 2018 and in the near future?

Great things are happening in RL research each and every day. Deep meta reinforcement learning will be the future of AI, bringing us close to achieving artificial general intelligence (AGI). Instead of creating different models to perform different tasks, with AGI a single model can master a wide variety of tasks and mimic human intelligence.

Gaming and robotics or simulations are the two popular domains where reinforcement learning is extensively used. In what other domains does RL find important use cases, and how?

Manufacturing

In manufacturing, intelligent robots are used to place objects in the right position. Whether a robot fails or succeeds in placing the object, it remembers the action and trains itself to do the task with greater accuracy. The use of intelligent agents will reduce labor costs and result in better performance.

Inventory management

RL is extensively used in inventory management, which is a crucial business activity. Some of these activities include supply chain management, demand forecasting, and handling several warehouse operations (such as placing products in warehouses to manage space efficiently).

Infrastructure management

RL is also used in infrastructure management. For instance, researchers at Google's DeepMind have developed RL algorithms for efficiently reducing the energy consumption of Google's own data centers.

Finance

RL is widely used in financial portfolio management, which is the process of continually redistributing a fund across different financial products, and also in predicting and trading in commercial transaction markets. JP Morgan has successfully used RL to provide better trade execution results for large orders.

Your recently published 'Hands-On Reinforcement Learning with Python' has received a very positive response from readers. What are some key challenges in learning reinforcement learning, and how does your book help?

One of the key challenges in learning reinforcement learning is the lack of intuitive examples and a poor understanding of RL fundamentals and the required math. The book addresses these challenges by explaining all the reinforcement learning concepts from scratch and gradually taking readers to advanced concepts, exploring them one at a time. It also explains all the required math step by step and intuitively, along with plenty of examples. My intention behind adding multiple examples and code to each chapter was to help readers understand the concepts better, and also to help them judge when to apply a particular algorithm. The book also works as a perfect reference for beginners who are new to reinforcement learning.

Are there any prerequisites needed to get the most out of the book? What should readers keep in mind while developing their own self-learning agents?

Readers who are familiar with machine learning and Python basics can easily follow the book. It starts by explaining reinforcement learning fundamentals and algorithms with applications, then takes the reader through deep learning algorithms, and finally explains advanced deep reinforcement learning algorithms. When creating self-learning agents, one should be careful in designing reward and goal functions.

What in your opinion are the major takeaways from your book?

The book serves as a solid go-to place for someone who wants to venture into deep reinforcement learning. It is completely beginner-friendly and takes readers to the advanced concepts gradually. By the end of the book, readers can master reinforcement learning, deep learning, and deep reinforcement learning, along with their applications in TensorFlow and all the required math.

Would you like to add anything more for our readers?

I would suggest that readers code the algorithms from scratch instead of using libraries; it will help them understand the concepts far better. I also would like to thank each and every reader for making this book a huge success. My best wishes to them for their reinforcement learning projects.
If you found this interview interesting, make sure you check out other insightful articles on reinforcement learning:

- Top 5 tools for reinforcement learning
- This self-driving car can drive in its imagination using deep reinforcement learning
- Dopamine: A Tensorflow-based framework for flexible and reproducible Reinforcement Learning research by Google
- OpenAI builds reinforcement learning based system giving robots human like dexterity
- DeepCube: A new deep reinforcement learning approach solves the Rubik's cube with no human help


Kali Linux 2018 for testing and maintaining Windows security - Wolf Halton and Bo Weaver [Interview]

Guest Contributor
17 Jan 2019
9 min read
Microsoft Windows is one of the two most common OSes, and managing its security has spawned the discipline of Windows security. Kali Linux is the premier platform for testing and maintaining Windows security. Kali is built on the Debian distribution of Linux and shares the legendary stability of that OS. This lets you focus on network penetration, password cracking, and forensics tools, not on the OS itself. In this interview, we talk to two experts, Wolf Halton and Bo Weaver, about using Kali Linux for pentesting. We also discuss their book, Kali Linux 2018: Windows Penetration Testing - Second Edition.

Read also: Kali Linux 2018 for testing and maintaining Windows security - Interview with Wolf Halton and Bo Weaver - Part 2

Kali Linux is the premier platform for testing and maintaining Windows security. According to you, what makes it ideal to use?

Bo Weaver: First, it runs on Linux and is built on Debian Linux. Second, the people at Offensive Security do a fantastic job of keeping it updated and stable, with the latest tools to support not just pentesting but also forensics work and network analysis and diagnostics. You can tell that this platform is built and maintained by real security experts and isn't some distro thrown together by marketing folks to make a buck.

Wolf Halton: Kali is a very stable and extensible open source platform. Offensive Security's first security platform, BackTrack, was customised in a non-POSIX way, breaking from UNIX and other Linux distros by putting the security tools in unexpected places in the filesystem. Since Kali was first released, they have used Debian Testing as a base and adhered to the usual file locations. This made Kali Linux far easier to use. The normalization of the OS behind the Kali distro makes it more productivity-friendly than most of the other "security distros," which are usually too self-consciously different; their developers carve out their space in the mass of distros through how quirky the interface is or how customizable the installation process has to be.

Why do you love working with Kali Linux?

Bo Weaver: I appreciate its stability. In all the years I have used Kali on a daily basis, I have had only one failure to update properly, and even then I didn't lose any data. I run Kali as my "daily driver" on both my personal and company laptops, so one failure in all that time is nothing. I even do my writing from my Kali machines. And yes, I do all my normal computing from a normal user account and NOT root! I don't have to go looking for a tool: any tool I need is either installed or in the repo. Since everything comes from the same repo, keeping all my tools and the system updated is a single command.

Wolf Halton: Kali is a stable platform, based upon a major distribution with which I am very familiar. There are over 400 security tools in the Kali repos, and it can also draw directly from the Debian Testing repos for even more. I always add a few applications on top of the default set of packages, but the menus work predictably, allowing me to install what I need without having to create a whole new menu system to get to them.

Can you tell the readers about some advantages and disadvantages of using Kali Linux for pentesting?

Bo Weaver: I really can't think of a disadvantage. The biggest advantage is that all these tools are in one toolbox (Kali).
I remember a time when building a pentesting machine would take a week: going out to find the tools and then building each one separately. Most tools had to be manually compiled for the machine. Remember "make", "make install"? And then having it bork over a missing library file? Now, in less than an hour, you can have a working pentesting machine running. As mentioned earlier, Kali has the tools to do any security job, not just pentesting: pulling evidence from a laptop for legal reasons, analyzing a network, finding what is breaking your network, breaking into a machine because the passwords are lost. Also, it runs on anything from a high-end workstation to a Raspberry Pi or a USB drive with no problem.

Wolf Halton: The biggest disadvantage is for Windows-centric users who have never used any other operating system. In our book, we try to ease these users into the exciting world of Linux. The biggest advantage is that the Kali Linux distro is in constant development. I can be sure that there will be a Kali distro available even if I wander off for a year. This is a great benefit for people who only use Linux when they want to run an ad hoc penetration test.

Can you give us a specific example (real or fictional) of why Kali Linux is the ideal solution to go for?

Bo Weaver: There are other distros out there for this use. Most don't have the completeness of toolsets; most security distros are set up to be run from a DVD and only contain a few tools to do a couple of tasks, not all security tasks. BlackArch Linux is the closest to Kali in comparison. BlackArch is built on Arch Linux, a bleeding-edge distro that doesn't have the stability of Debian. Sometimes Arch will bork on an update due to bleeding-edge buggy code. This is fine in a testing environment, but when working in production you need your system to run at the time of testing. It's embarrassing to call the customer and say you lost three hours on a test fixing your machine. I'm not knocking BlackArch; they did a fine job on the build and the toolsets included. I just don't trust Arch to be stable enough for me. That's not saying anything bad about Arch Linux: it has its place in the distro world and fills it well. Some people like bleeding edge; it's a personal choice. The great thing about Linux overall is that you have choices. You're not locked into one way a system looks or works. Kali comes with five different desktop environments, so you can choose the one that's best for you. I personally like KDE.

Wolf Halton: I have had to find tools for various purposes:

- Tools to recover data from failed hard drives,
- Tools to stress-test hundreds of systems at a time,
- Tools to check whole networks at a time for vulnerabilities,
- Tools to check for weak passwords,
- Tools to perform phishing tests on email users,
- Tools to break into Windows machines, security appliances, and network devices.

Kali Linux is the one platform where I could find multiple tools to perform all of these tasks and many more.
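As a rough illustration of the kind of primitive that the network-checking tools Wolf lists are built on, here is a toy TCP reachability check in Python (our own sketch, not code from the book; real scanners such as those shipped with Kali add timing, service fingerprinting, and far more):

```python
import socket

def tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds.
    This single check, repeated across thousands of ports and hosts,
    is the core loop that port scanners automate."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Only probe hosts you own or are explicitly authorized to test.
for port in (22, 80, 443):
    state = "open" if tcp_port_open("127.0.0.1", port) else "closed/filtered"
    print(f"port {port}: {state}")
```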
Congratulations on your recent book, Kali Linux 2018: Windows Penetration Testing - Second Edition. Can you elaborate on the key takeaways for readers?

Bo Weaver: I hope the readers come away with a greater understanding of system and network security, and of how easy it is to breach a system if simple, proper security rules are not followed. By following simple no-cost rules like properly updating your systems and proper network segmentation, you can defeat most of the exploits in the book.

Over the years, Wolf and I have been asked by a lot of Windows administrators, "How do you do a pentest?" These people don't want a simple, glossed-over answer. They are engineers who understand their systems and how they work; they want a blow-by-blow description of how you actually broke in, so they can understand the problem and properly fix it. The book is the perfect solution for them. It contains methods we use in our work on a daily basis, from scanning to post-exploitation work. I also hope readers discover how easy Linux is to use as a desktop workstation, and the security advantages of using Linux as your workstation OS, and make the switch from Windows to the Linux desktop. I want to thank the readers of our book and hope they walk away with a greater understanding of system security.

Wolf Halton: The main thing we tried to do with both the first and second editions of this book is to give a useful engineer-to-engineer overview of the possibilities of using Kali to test one's own network, including very specific approaches and methods to prove the network's security. We never write fictionalized, unworkable testing scenarios, as we believe our readers want to actually improve their craft and make their networks safer, even when there is no budget for fancy-schmancy proprietary Windows-based security tools that make their non-techie managers feel safer. The world of pentesting is still edgy and interesting, and we try to infuse the book with our own keen interest in testing and developing attack models before the red-team hackers get there.

Thanks, Bo and Wolf, for a very insightful perspective into the world of pentesting and Kali Linux! Readers, if you are looking for help to quickly pentest your system and network using easy-to-follow instructions and supporting images, Kali Linux 2018: Windows Penetration Testing - Second Edition might just be the book for you.

Author Bio

Wolf Halton is an authority on computer and internet security, a bestselling author on computer security, and the CEO of Atlanta Cloud Technology. He specializes in business continuity, security engineering, open source consulting, marketing automation, virtualization and data center restructuring, network architecture, and Linux administration.

Bo Weaver is an old-school, ponytailed geek. His first involvement with networks was in 1972, while in the US Navy working on an R&D project called ARPANET. Bo has been working with and using Linux daily since the 1990s and is a promoter of open source. (Yes, Bo runs on Linux.) He now works as the senior penetration tester and security researcher for CompliancePoint, an Atlanta-based security consulting company.

Read next:

- Pentest tool in focus: Metasploit
- Kali Linux 2018.2 released
- How artificial intelligence can improve pentesting


“Everybody can benefit from adopting Odoo, whether you’re a small start-up or a giant tech company” - An interview with Odoo community hero, Yenthe Van Ginneken

Sugandha Lahoti
29 Jun 2018
9 min read
Odoo is one of the fastest-growing open source business application development products available. It comes with:

- A powerful GUI
- Performance optimization
- Integrated in-app purchase features
- A fast-growing community helping to transform and modernize businesses

We recently interviewed Yenthe Van Ginneken, an Odoo developer who is highly active in the Odoo community and the recipient of Odoo's "Best contributor of the year 2016" and "Odoo community hero 2017" awards. He spoke to us about his journey with Odoo, his thoughts on Odoo's past, present, and future, and the Odoo community.

Expert's Bio

Yenthe Van Ginneken, currently the technical team leader at Odoo Experts, has been an Odoo developer for over four years. He has won two awards: "Best contributor of the year 2016" and the "Odoo community hero" award in 2017. He loves improving software and teaching other people the best practices for Odoo development on his blog. You can read his Odoo blog, follow him on Twitter, or reach out to him on LinkedIn.

Key Takeaways

- Odoo is scalable and flexible to the extent that everyone, from a small start-up to a giant tech company, can benefit from it.
- It is ahead of quite a lot of ERP systems with its clean UI, advanced module integration, and the flexibility of its technical framework.
- Python is the language of choice among most developers who want to use the Odoo framework, especially for automating and scaling tasks.
- The Odoo community is diverse and vast. By contributing and regularly interacting with other members, you will gain deeper insights into many different aspects of Odoo development. A great way to learn Odoo development and grow quickly is by helping in the community.
- Odoo 12 will reportedly improve data processing, offer better report insights, and support OCR (optical character recognition) for handling documents, among other exciting updates.

Full Interview

On who should use Odoo

Odoo is more than an ERP tool. According to you, what is Odoo? Who will benefit from adopting Odoo? What made you choose Odoo?

For me, Odoo is more than an ERP. Odoo literally allows me to build any module or functionality that I can think of. Since Odoo is so flexible and scalable, I believe that almost everybody can benefit from adopting it, whether you're a small start-up or a giant tech company. The most important part of benefiting from Odoo is adjusting your processes and mindset to use Odoo, not adjusting Odoo to the company. The projects that work best and deliver the most benefit are those that don't over-engineer and that focus on the main company processes.

I personally chose Odoo after I got an opportunity to become an Odoo developer at a company in Belgium. After the job offer I visited Odoo.com and saw the massive number of functionalities in Odoo (while being free!), and I was genuinely amazed. After looking at the technical framework and all the default options provided by it, I was sure that I would love developing and implementing in Odoo. Since that day I never stopped working with Odoo.

On the journey from OpenERP to Odoo

Odoo started off as OpenERP, and then in 2014 it moved beyond just ERP and was renamed Odoo. How has Odoo's journey been since then? What do you think are the key milestones Odoo has achieved to date?

Since the renaming from OpenERP to Odoo, the company has seen rapid growth. A bit after changing the name, Odoo also introduced the enterprise version, which was, in my opinion, the turning point for Odoo S.A.
It allowed Odoo to keep its open source strength and market share while also gathering funds to finance the ongoing growth of the product. The big investments being made in the research and development team allow them to keep improving year after year. The main strengths and key milestones of Odoo are absolutely its flexibility, a great framework, and the fact that most of the possibilities are already in Odoo by default.

On the drive behind contributing to the Odoo community

You are highly active in the Odoo community. How did you get into contributing to Odoo? How has this experience improved you as a developer? According to you, what are the key challenges the Odoo community is currently facing?

My very first contributions started in the second half of 2014 and weren't very significant at first. I noticed that Odoo 8, at that point the newest version, was not very well translated and had a lot of inconsistencies, so I started translating it into Dutch. From there I noticed that this could have quite a big impact and in fact improve the ERP. It didn't take long before I started contributing in other ways: reporting issues, fixing bugs, maintaining bug reports, and helping other people on the official help forums. By contributing to all these different areas I got introduced to more domains and gained more insights.

Thanks to my involvement with the community, I've learned that there is more than one side to developing and implementing projects. I believe it made me a better programmer and made me think a lot more about how to code custom developments for projects. Without being active in a community and contributing, you'll be blinded by your own perspective. It is a great way to get challenged, and you'll see more cases by being active in the community than you could ever see on your own.

The Odoo community faces a few challenges at this point. It is difficult to maintain the right balance between the enterprise version and the community (free) version. There are not a lot of very active contributors to the official Odoo code, and Odoo is behind on handling fixes and bug reports made by community members. This results in some community members not feeling appreciated or heard. Hiring a second community manager might be a good way to resolve these issues, though. The most difficult challenge, for both Odoo and the community, is to make everybody feel heard and to give every person the ability to contribute in the way he or she can. When there is enough help from Odoo and the community feels supported, there is the possibility of a great and thriving community.

On how to learn Odoo effectively

As a person who has a strong hold on Odoo development, what is the typical learning curve for someone getting into Odoo as a consultant? What is the best way to start developing in Odoo? What should one watch out for while learning?

The learning curve can be quite long and has its challenges. Usually, if you don't have any experience with Odoo and only know basic Python, it'll take about six months before you really get to know the ins and outs of Odoo. The best way to learn to develop in Odoo is probably the same as with most things in technology: dive in! Make sure you get the basics right and understand how the main functionalities work before going deeper. A great way to learn Odoo development and grow quickly is by helping in the community: you can get insight and help from experienced developers while also contributing back; it's a win-win.
Start small and build your way up to the details. It is important to find good documentation and tutorials, though. At the moment there are still quite a few blog posts and tutorials of rather low quality. Because of this I actually started writing my own tutorials, which explain concepts step by step with samples. You can find them at https://odoo.yenthevg.com

Editor's note: Check out our collection of Odoo Books and Videos to master Odoo development.

On the upcoming Odoo 12 release

Odoo 12 is expected to be released later this year. What's got you excited about this new release?

Quite a lot! Every release announces loads of new features, and it's an exciting time, every time. The introduction of a report designer for functional people is one of the best (known) new features. The improved reporting tools for data insight will be a great improvement too. The biggest announcements are made at Odoo Experience in October and are not publicly available yet, so we'll have to wait for that.

On the future of ERP

There is a lot happening in the area of ERP and BI: self-service analytics, real-time analytics, agile BI development, and so on. Where do you foresee the ERP market heading? We've seen ERP/CRM systems getting powerful built-in analytics; what do you think is next for the industry? What is Odoo's role here?

As with any sector in IT, a lot is becoming very data-driven, and in the future the integration and usage of data will only grow. I expect the combination of BI and AI to become a powerful way to process and handle data on unseen scales. Odoo itself has already hinted at improved data processing, better report insights, and support for OCR (optical character recognition) for handling documents. Odoo has been ahead of quite a lot of ERP systems for years with its clean UI, advanced module integration, and the flexibility of its technical framework. I expect Odoo will also lead the way in handling all this data and getting important statistics out of it. I'm quite sure it is only a matter of time before Odoo starts working on even better BI reporting and tools.

On Python and automation

Automation is everywhere today and is becoming an integral part of organizations and processes. Python and automation have gone hand in hand since Python's early days, and today Python is one of the top programming languages. How do you see Python's evolution over the years in the area of automation? What are the top ways you use Python for automation today?

It is for a reason that Python is so popular. It is flexible and quite quick to program with, and the options are virtually endless. In the coming years Python will only become more popular, and this will also be the case for automation projects made with Python. I personally use the Odoo framework with Python as a backbone for nearly everything that I automate (and in fact also for non-automated tasks). The projects vary from automatically handling stock moves to automatically updating remote instances to automatically generating full diagnostic reports. The combination of the programming language and the Odoo framework allows me to automate tasks and deploy them on a big scale.
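To give a flavor of what "the Odoo framework with Python as a backbone" looks like in practice, here is a minimal, hypothetical custom model of the kind an Odoo addon defines. The module name and fields are invented for illustration (this is not code from the interview), and the date arithmetic assumes the Odoo 12-style API, where Date fields are Python date objects:

```python
# models/equipment.py -- a minimal custom Odoo model (illustrative only).
# In a real addon this sits alongside an __init__.py and a __manifest__.py
# that register the module with Odoo.
from odoo import api, fields, models


class Equipment(models.Model):
    _name = "x_workshop.equipment"      # hypothetical model name
    _description = "Workshop Equipment"

    name = fields.Char(required=True)
    purchase_date = fields.Date()
    cost = fields.Float()
    in_service = fields.Boolean(default=True)
    age_days = fields.Integer(compute="_compute_age_days")

    @api.depends("purchase_date")
    def _compute_age_days(self):
        # Computed fields let Odoo derive values automatically instead of
        # storing them by hand -- a small example of the automation Yenthe
        # describes.
        for record in self:
            if record.purchase_date:
                record.age_days = (fields.Date.today() - record.purchase_date).days
            else:
                record.age_days = 0
```

Once installed, records of this model get Odoo's list and form views, access control, and ORM search/create APIs with no extra code, which is why so little Python goes such a long way in Odoo.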
Read next:

- ERP tool in focus: Odoo 11
- How to Scaffold a New module in Odoo 11
- A step by step guide to creating Odoo Addon Modules


“With Python, you can create self-explanatory, concise, and engaging data visuals, and present insights that impact your business” - Tim Großmann and Mario Döbler [Interview]

Savia Lobo
10 Nov 2018
6 min read
Data today is the world's most important resource, but without proper visualization to surface meaningful insights, it's useless. Creating visualizations gives you a clearer, more concise view of the data and makes it more tangible for (non-technical) audiences. To illustrate this, below are questions aimed at giving you an idea of why data visualization is so important and why Python should be your choice for it. In a recent interview, Tim Großmann and Mario Döbler, the authors of the course 'Data Visualization with Python', shared with us the importance of data visualization and why Python is the best fit for carrying it out.

Key Takeaways

- Data visualization is a great way, and sometimes the only way, to make sense of the constantly growing mountain of data being generated today.
- With Python, you can create self-explanatory, concise, and engaging data visuals, and present insights that impact your business.
- Your data visualizations will make information more tangible for stakeholders while telling them an interesting story.
- Visualizations are a great tool for transferring your understanding of the data to a less technical co-worker, building a faster and better understanding of data.
- Python is the most requested and used language in the industry. Its ease of use, the speed at which you can manipulate and visualize data, and the number of available libraries make Python the best choice.

Full Interview

Why is data visualization important? What problem is it solving?

As the amount of data grows, the need for developers with knowledge of data analytics, and especially data visualization, spikes. In recent years we have experienced exponential growth of data; currently, the amount of data doubles every two years. For example, more than eight thousand tweets are sent per second, and more than eight hundred photos are uploaded to Instagram per second. To cope with these large amounts of data, visualization is essential to make it more accessible and understandable. Everyone has heard the saying that a picture is worth a thousand words. Humans process visual data better and faster than any other type of data.

Another important point is that data is not necessarily the same as information. Often people aren't interested in the data itself, but in some information hidden in it. Data visualization is a great tool for discovering the hidden patterns and revealing the relevant information. It bridges the gap between quantitative data and human reasoning; in other words, visualization turns data into meaningful information.

What other similar solutions or tools are out there? Why is Python better?

Data visualizations can be created in many ways using many different tools. MATLAB and R are two of the languages heavily used in the field of data science and data visualization. There are also non-coding tools like Tableau, which are used to quickly create basic visualizations. However, Python is the most requested and used language in the industry. Its ease of use, the speed at which you can manipulate and visualize data, and the number of available libraries make Python the best choice. In addition to all these perks, Python has an incredibly big ecosystem with thousands of active developers. Python really differs in that it allows users to build their own small additions to the tools they use, if necessary. There are examples of pretty much everything online for you to use, modify, and learn from.
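As a sense of how little code a first visualization takes, here is a minimal example using matplotlib, one of the standard Python plotting libraries. The dataset is invented for illustration; the snippet is ours, not from the course:

```python
import matplotlib.pyplot as plt

# Hypothetical data: monthly active users, invented for this example.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
users = [1200, 1350, 1800, 1743, 2100, 2680]

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(months, users, marker="o")
ax.set_title("Monthly active users (sample data)")
ax.set_xlabel("Month")
ax.set_ylabel("Users")
ax.grid(True, alpha=0.3)

plt.tight_layout()
plt.show()  # or fig.savefig("users.png") to embed the chart in a report
```

A dozen lines turn a bare table of numbers into a trend anyone in the room can read at a glance, which is exactly the point the authors make above.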
How can data visualization help developers? Give specific examples of how it can solve a problem.

Working with, and especially understanding, large amounts of data can be a hard task. Without visualizations, this might even be impossible for some datasets. Especially if you need to transfer your understanding of the data to a less technical co-worker, visualizations are a great tool for a faster and better understanding of data. In general, looking at your data visualized often says more than a thousand words. Imagine getting a dataset that consists only of numerical columns. Getting good insights into this data without visualizations is impossible; yet even with some simple plots you can often dramatically improve your understanding of the most difficult datasets. Think back to the last time you had to give a presentation about your findings and all you had was a table of numerical values. You understood it, but your colleagues sat there and scratched their heads. Had you instead created some simple visualizations, you would have impressed the entire team with your results.

What are some best practices for learning and using data visualization with Python?

Some of the best practices you should keep in mind while visualizing data with Python are:

- Start looking at and experimenting with examples
- Start from scratch and build on it
- Make full use of documentation
- Use every opportunity you have with data to visualize it

To know more about the best practices in detail, read our notes on 4 tips for learning Data Visualization with Python.

What are some myths and misconceptions surrounding data visualization with Python?

- Data visualizations are just for data scientists
- Its technologies are difficult to learn
- Data visualization isn't needed for data insights
- Data visualization takes a lot of time

Read about these myths in detail in our article, 'Python Data Visualization myths you should know about'.

Data visualization in combination with Python is an essential skill when working with data. When properly utilized, it is a powerful combination that not only enables you to get better insights into your data but also gives you the tools to communicate results better. Data nowadays is everywhere, so developers of every discipline should be able to work with it and understand it.

About the authors

Tim Großmann is a CS student with interests in diverse topics ranging from AI to IoT. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of big data engineering. He's highly involved in different open source projects and actively speaks at meetups and conferences about his projects and experiences.

Mario Döbler is a graduate student with a focus on deep learning and AI. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of deep learning, using state-of-the-art algorithms to develop cutting-edge products. Currently, he dedicates himself to applying deep learning to medical data to make health care accessible to everyone.

Read next:

- Setting up Apache Druid in Hadoop for Data visualizations [Tutorial]
- 8 ways to improve your data visualizations
- Getting started with Data Visualization in Tableau

Imran Bashir on the Fundamentals of Blockchain, its Myths, and an Ideal Path for Beginners

Expert Network
15 Feb 2021
5 min read
With the invention of Bitcoin in 2008, the world was introduced to a new concept, blockchain, that promised to have an impact on every industry. This new concept is the technology that underpins Bitcoin. Blockchain technology is the backbone of cryptocurrencies, and it has applications in finance, government, media, and many other industries. Some describe blockchain as a revolution, whereas another school of thought believes that it is going to be more evolutionary, and that it will take many years before its practical benefits reach fruition. This thinking is correct to some extent, but, in Imran Bashir's opinion, the revolution has already begun. It is a technology that has an impact on current technologies too, and it possesses the ability to change them at a fundamental level. Let's hear from Imran on the fundamentals of blockchain technology, its myths, and his recent book, Mastering Blockchain, Third Edition.

What is blockchain technology? How would you describe it to a beginner in the field?

Blockchain is a distributed ledger that runs on a decentralized peer-to-peer network. First introduced with Bitcoin as a mechanism that ensures the security of the electronic cash system, blockchain has now become a prime area of research with many applications in a variety of industries and sectors.

What should be the starting point for someone aiming to begin their journey in blockchain?

Focus on the underlying principles and core concepts, such as distributed systems, consensus, and cryptography, and do your early development without helper tools. Once you understand the basics and the underlying mechanics, you can use tools such as Truffle or some other framework to make your developer life easier; however, it is extremely important to learn the underlying concepts first.

What is the biggest myth about blockchain?

Sometimes people believe that blockchain IS cryptocurrency; however, that is not the case. Blockchain is the underlying technology behind cryptocurrencies that ensures the security and integrity of the system and prevents double spends. Cryptocurrency can be considered one application of blockchain technology out of many.

"Blockchain is one of the most disruptive emerging technologies today." How much do you agree with this?

Indeed, it is true. Blockchain is changing the way we do business. In the next five years or so, financial systems, government systems, and other major sectors will all have blockchain integrated in one way or another.

What factors are driving the mainstream adoption of blockchain?

The development of standards, interoperability efforts, and consortium blockchains are all contributing towards the mainstream adoption of blockchain. Demand for more security, transparency, and decentralization in some sectors is also a key driver of adoption; for example, a prime solution for decentralized sovereign identity is blockchain.

How do you explain the term bitcoin mining?

Mining is a colloquial term for the process of creating new bitcoins, where a miner repeatedly tries to find a solution to a math puzzle, and whoever finds it first wins the right to create a new block and earn bitcoins as a reward.
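The "math puzzle" Imran refers to is, in Bitcoin's case, a hash-based proof of work: keep changing a nonce until the block's hash falls below a target. Here is a toy sketch of the idea in Python; real Bitcoin mining uses double SHA-256 over a binary block header and an astronomically harder target, and the block data string below is purely illustrative:

```python
import hashlib

def mine(block_data, difficulty=4):
    """Find a nonce so that sha256(block_data + nonce) starts with
    `difficulty` hex zeros -- a scaled-down proof of work."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("prev_hash|tx_root|timestamp")
print(f"nonce={nonce} hash={digest}")
# Verifying the answer takes a single hash; finding it took roughly 16**4
# attempts on average. That asymmetry -- expensive to produce, cheap to
# check -- is what makes the puzzle useful for securing the chain.
```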
How can blockchain protect the global economy?

I think with the trust, transparency, and security guarantees provided by blockchain, we can envisage a future where financial crime is limited to a great degree. That can have a good impact on the global economy. Furthermore, the development of CBDCs (central bank digital currencies) is expected to have a major impact on the economy and help to stabilize it. From an inclusion point of view, blockchain can allow unbanked populations to play a role in the global financial system. If cryptocurrencies replace the current monetary system, then, because of the decentralized nature of blockchain, major cost savings can be achieved, as no intermediaries or banks will be required, and a peer-to-peer, extremely low-cost, global financial system can emerge that transforms the world economy. The entire remittance ecosystem can evolve into an extremely low-cost, secure, real-time system that includes people who were previously unbanked. The possibilities are endless.

Tell us a bit about your book, Mastering Blockchain, Third Edition.

Mastering Blockchain, Third Edition is a unique combination of theory and practice. Not only does it provide a holistic view of most areas of blockchain technology, it also covers hands-on exercises using Ethereum, Bitcoin, Quorum, and Hyperledger to equip readers with both theoretical and practical knowledge. The third edition includes four new chapters on hot topics such as blockchain consensus, tokenization, Ethereum 2, and enterprise blockchains.

About the author

Imran Bashir has an M.Sc. in Information Security from Royal Holloway, University of London, and has a background in software development, solution architecture, infrastructure management, and IT service management. He is also a member of the Institute of Electrical and Electronics Engineers (IEEE) and the British Computer Society (BCS). Imran has extensive experience in both the public and financial sectors, having worked on large-scale IT projects in the public sector before moving to the financial services industry. Since then, he has worked in various technical roles for different financial companies in Europe's financial capital, London.


"Developers don't belong on a pedestal, they're doing a job like everyone else" - April Wensel on toxic tech culture and Compassionate Coding [Interview]

Richard Gall
02 Jul 2019
15 min read
It's well known that there's a toxic element to tech culture. And although it isn't new, it has nevertheless surfaced and become more visible thanks to the increasing maturity of the platforms that today shape public discourse. As those platforms empower new voices to speak and allow new communities to organize, the very fabric of the culture on which many of them were built - hyper-masculine, competitive, and with scant regard for the wider implications of their decisions on users - becomes the target of critique. But while everything from sexual harassment cover-ups to content moderation crises signals deep-rooted issues inside the tech industry, substantially transforming tech's culture is a problem that's more difficult to solve. It's also one that many leading organizations and individuals seem unwilling to properly engage with.

This is where April Wensel comes in. She's made it her mission to help tackle issues of toxicity and ultimately transform tech culture with her organization, Compassionate Coding.

What is Compassionate Coding?

Compassionate Coding was launched in 2016 as a "response to a lot of the problems I saw in the tech industry with culture," Wensel told me when we spoke recently over Skype. "The common thread," she explains, "was a lack of concern for human beings that are involved in technology or affected by technology."

This is particularly significant for Wensel. While it might be tempting to see the Google Walkout, the Cambridge Analytica scandal, and the controversy around Rekognition as nothing more than a collection of troubling but ultimately unrelated issues, it's vital that we understand them together.

[Image via compassionatecoding.com]

"For things to really change - we can't approach each issue as one problem," Wensel says. "They really have the same root problem, which is this lack of compassion."

Compassion is an important and very deliberate word. It wasn't chosen purely for its alliterative impact. "I chose compassion because I see compassion as a really rational thing; not just an intangible thing." Compassion is, Wensel continues, "a more active form of empathy. Empathy allows you to feel what others are feeling, compassion allows you to see suffering, and - the important piece - to want to alleviate suffering."

Compassion as an antidote to toxic tech culture

To talk about compassion in the tech industry is provocative. Wensel recalls someone on Reddit describing the idea of compassionate coding as 'girly'. But she tries to "tune out" online resistance, adopting a measured attitude: "whenever you have new, challenging ideas people get defensive."

Even where people weren't aggressively opposed to her ideas, initially there was a distinct unwillingness to really engage with them. "I… saw that it wasn't cool to talk about these things. If you started talking about humans or whatnot, people are like oh, you must be a designer or you must be in product… No, I'm a developer. I just care about the people we're impacting."

Crucial to this attitude is Wensel's point that compassionate coding is something that can have real effects at every level. She describes it as "a new way of weighing decisions on a daily basis… it goes from high-level things like what are we building?
to low-level things - what should I name this variable to make it easier for somebody in the future to understand?"

Distributing power through diversity

The context into which Compassionate Coding has entered the world is complex. High-profile scandals need attention and action, but they are only the tip of the iceberg. They are symptomatic of low-lying problems that often pass unnoticed.

Diversity is a good example. Although it's often framed in the somewhat prosaic context of equal opportunities, it's actually a powerful way of breaking apart privilege and the concentration of power that allows harmful products to be released and discrimination to find its way into organizational practices. By bringing people from a diverse range of backgrounds and experiences into positions of authority and influence, decisions made at all levels are supported by a greater awareness of context. In effect, decision making becomes more rigorous. Similarly, organizations themselves become safer and more welcoming places for employees from minority backgrounds, because networks of support can form, making it less professionally risky to challenge malpractice or even abuse.

This is something Wensel is well aware of. She takes issue with the concept of 'diversity of thought', which she sees as a way to mask a lack of genuine diversity. "A lot of companies claim they have diversity of thought…" she says, "that are all white men." "You can't really have true diversity of thought if everybody has come from the same background and hasn't had any of the challenges that people from minority backgrounds might face."

The barriers to diversity are largely structural problems that can be felt far beyond tech. But according to Wensel, there are nevertheless cultural issues unique to the industry that compound the problem: "If you say you value diversity but really one of your values is the efficiency or perceived efficiency that comes when everyone thinks the same way then you have to realise that you're gonna have to make some concessions in terms of creating a bit of discomfort when people are debating issues… because there is going to be some conflict when you create these diverse spaces."

Put another way, in an industry where you're expected to move quickly and adapt, where you're constantly looking for efficiency, diversity is always going to be an issue. It brings friction. For Wensel, the role Compassionate Coding can play in supporting diversity and inclusion is to help shift the industry mindset away from one that is scared of friction to one where friction is vital if we're to build better, safer, and more secure software.

She points out that diversity isn't just an initiative; it must be something that is constantly practiced: "Inclusion has to be a daily practice and so you need somebody who is in a position of power who can help establish inclusive practices," she says. It also needs to be something organizations invest in: "companies need to be paying people to do this because a lot of times the burden falls on underrepresented groups in the company and that's not right."

Read next: Github Sponsors: Could corporate strategy eat FOSS culture for dinner?

The problem with meritocracy

If diversity can help unlock a better way of working in the tech industry, there are still other industry shibboleths that need to be slain. According to Wensel, one of these is meritocracy.
It is, she argues, often used as cover by those who are resistant to genuine diversity. "A lot of the time in tech people want to talk about a meritocracy… [Recode co-founder] Kara Swisher says tech is more like a mirrortocracy because the people who succeed look like the ones who are already in the industry."

https://youtu.be/ng4sbQHCGLQ

What makes this problem worse is that tech's meritocracy is haunted by stereotypes and assumptions about what it means to be a developer. She points to a study done by IBM in the sixties that aimed to find out "what makes a good, strong programmer." "They found among other things that programmers like puzzles, and they don't like people… So it created a stereotype of what it means to be a good developer, and part of that was not liking people. And the reason that was so important - even though it was back in the sixties - is that IBM was a very influential company in terms of establishing tech culture," Wensel says.

Stack Overflow's negative impact on tech culture

What has further exacerbated this issue is how influential figures have helped to reinforce these stereotypes, effectively buying into the image of a programmer put forward in IBM's research. In particular, Wensel calls out Stack Overflow and its founders Joel Spolsky and Jeff Atwood. "If you read through some of their old blogs from the early 2000s," she says, "you can see a lot of the elements of the toxic culture that I talk about in so much of my work. Things like... hyper-competition… an over-focus on aggressive competition… things like zero-sum thinking. There's an elitism - there's not enough for everybody and some people are better than others."

Wensel suggests the attitudes of Atwood and Spolsky have been instrumental in forming the worst elements of the website, "where the focus is not on helping people, but on accumulating points in the game of Stack Overflow."

Wensel detailed her experiences of Stack Overflow and offered an incisive critique of the website in a post on Medium in 2018. She reveals that although she has used Stack Overflow since its launch in 2008 (the year she graduated from her Computer Science class), "the condescending and blatantly rude responses on the site" have dissuaded her from ever actually creating an account.

Although the Compassionate Coding founder can see that the site is trying to change things, she believes it can still do a lot more (in her post she includes a response from Stack Overflow employee Joe Friend). The problem, however, is that this would be a risk for the company. "They really have to be willing to alienate their audience - the ones who are contributing to the toxic culture."

Ultimately this highlights the problem facing many companies and communities in the tech industry: inclusivity and diversity aren't things that can simply be integrated into established patterns and beliefs. Those beliefs and values need to change too. Which can, of course, be painful.

Dismantling the hierarchy of tech skills

Again, it's important to note that Wensel's criticisms aren't just on the grounds of civility or accessibility. The hierarchy she describes is ultimately bad for the industry as a whole and bad for users. It helps to cultivate an engineering culture where certain skills are overvalued while others are excluded. This has consequences for how we view ourselves in the industry (we're never good enough, and we constantly have to compete), but it also means the sort of work and thought that should go into building and delivering software is viewed as less important.
"None of this is productive and none of this is creating value. We need people doing all of these roles, and so which one of these has more prestige shouldn't be an issue," Wensel argues. "That's why one of the very clear indications that there's a problem in the culture is the fact that we are obsessed with the need to rank skills... software projects are failing for people reasons. And yet people who are good with people and technology are seen as too soft… they're put in a box of not being technical."

Wensel argues that we need to stop worrying about who is and who isn't a developer. "There's no such thing as a real developer. If you write code you're a developer... that's enough… Developers are no better than designers, or product managers, or salespeople… that hierarchy is even more entrenched because it's often reflected in salaries - so developers get paid disproportionately more than all these other roles."

The myth of scarcity and the tech skills gap

What's more, Wensel believes this hierarchy of programming skills is actually helping to perpetuate the notion of a tech skills gap. She believes the idea that there is a scarcity of "tech talent" is a "myth."

"I think there's tons of talent in tech that's being overlooked for reasons of unconscious bias, stereotypes…" she explains. "Once we start to bring these people to the table who are out there already - very talented, very skilled - it will start to melt away this whole putting developers on a pedestal… developers don't belong on a pedestal, they're just doing a job like anybody else."

Wensel believes we will - and need to - move towards a world where programming skills lose their "prestige". Having Python or React on your CV, for example, should really be no different from saying you know how to use Excel. "As these skills become seen for what they are, which is just something that anybody can learn if they put in the time, then I think that the prestige around them will be reduced."

How Agile is changing what it means to be a developer

We're moving towards a world where the solipsistic valorization of technical skill becomes outdated thanks to broader industry trends. With DevOps forcing developers to become accountable for the full lifecycle of their code, and distributed systems engineering requiring a holistic awareness of a complex network of dependencies, sensitivity to how your code interacts with and impacts users in the real world matters more in software engineering than it ever has.

"Over and over again I see, both in formal studies and anecdotally… what's causing software projects to fail or to be delayed... are people problems. Coordination problems, planning problems, resourcing, all of that - not purely technical problems," says Wensel.

That said, Wensel views Agile as a trend that's positive for the industry. "A lot of the ideas behind agile software development are really positive. In a lot of ways I see it as the first step in bringing emotional intelligence to the software team, because you're asked to consider the end user…"

Read next: DevOps Engineering and Full-Stack Development – 2 Sides of the Same Agile Coin

However, she also says that software engineering practices and philosophies like Agile only go so far. "The problem is that they [proponents of Agile] didn't bring in the ethics there.
So you can still create a lot of value very efficiently with agile development without considering the long-term impact." Agile is a good context for Wensel to drive her mission forward, but it can't improve things on its own.

Read next: Honeycomb CEO Charity Majors discusses observability and dealing with "the coming armageddon of complexity" [Interview]

Putting Compassionate Coding into practice

It's clear that Compassionate Coding is needed in today's software industry. Yes, tech culture's toxicity is damaging and dangerous for everyone, but it's also not fit for purpose. It's stopping us from evolving and building the software people actually need. Think of it this way: it's stopping us from putting users first at a time when the very idea of the individual feels vulnerable, thanks to a whirlwind of reactionary politics and rampant, unsustainable capitalism. However, it's important that we actually see Compassionate Coding as something that can be practiced, both by individuals and organizations.

The 4 levels of compassionate coding

Wensel explains compassionate coding as involving four key 'levels'. These levels turn the concept into something practical that every individual and team can actually go and do themselves. "It's how you treat yourself with compassion, how you treat your coworkers, your collaborators with compassion, how you treat your direct users of the software you're creating… and how you treat the community at large who may or may not be people who use your product," she says.

Wensel is not only continuing to deliver training sessions and keynotes for her clients but is also writing a book that will make her ideas more accessible. I asked her what advice she would offer individuals and businesses that want to follow her lead now. "The biggest thing people can do," she says, "is to analyze their own thinking… Do a bit of meta-cognition to understand: how do I think? Where do I have biases?" At an organizational level, businesses should be "prioritizing talking about these issues, making it safe to talk about these issues, hiring people who understand these issues and can improve your company in these ways," she says.

The importance of the individual in tackling tech's toxicity

But Wensel still believes in the importance of individuals in enacting change. "It's humans all the way down and all the way up… Leadership in a company and [the issue of] who makes decisions is just... another set of humans, and so I think changing individuals is really powerful."

Her approach is ultimately one that espouses the values of Compassionate Coding. "You can't control the outcome but you can control the actions you take. So I have a lot of faith in the change that motivated individuals can make." If everyone in the industry could adopt that attitude, we'd surely be some way towards not only better professional lives but also better experiences and products for users.

Follow April on Twitter: @aprilwensel

Other projects that are making the tech industry better

April cited a number of organizations that she believes are doing great and important work across the tech industry:

- Project Include, an organization that wants to accelerate diversity in the industry.
- Black Girls Code, which aims to improve the number of women of color in the digital sector.
- Elephant in the Valley, which is tackling gender disparity in Silicon Valley.
- Kapor Center, removing barriers for underrepresented groups in tech.

Learn more about the issues they're helping to solve, and support them if you can.

Minko Gechev: "Developers should learn all major front end frameworks to go to the next level"

Richard Gall
04 Jun 2018
6 min read
This year's Skill Up survey produced some interesting results when it came to the best front end frameworks. Angular remains the most established tool, with 40% of web developers reporting that they use it regularly. React is actually a little further behind, with 25% using it regularly. Similarly, Vue.js is growing, but used by 20% of respondents. However, opinion was a little different when we asked which front end framework should win the battle of the 3 big front end tools. Respondents were split on Angular and React, with both JavaScript tools winning 34% of the vote. Vue wasn't far behind, at just over 30%. With the web development world apparently split over which framework is going to define the future of the field, how are we to pick them apart? Or do we even really need to worry?

Read the report in full. Sign up to our weekly newsletter and download the PDF for free.

The fact that we have three great front end tools jostling for developer attention is surely a good thing, right? To help us make sense of these trends, we caught up with Angular expert Minko Gechev to find out what he makes of web development in 2018, and what front end developers should be learning. Minko Gechev is the author of Switching to Angular. You can find the latest edition here on the Packt Store.

Which front end framework should you learn: Angular, React, or Vue?

Respondents to the Skill Up survey were evenly split between Angular, React, and Vue in the 'battle of the frameworks'. Which do you think developers should learn, and why?

In all of them there are unique and interesting ideas which are worth exploring. I truly believe that learning all the major frameworks can help developers go to the next level! This doesn't necessarily mean being proficient in all of them. Having a high-level understanding of how the frameworks work and how to use them is completely enough, and will allow you to adapt according to a project's requirements. This is similar to learning programming languages from different paradigms - it helps you discover how problems are being solved in different ways.

Over the past couple of years, the redux pattern has become the de facto standard for state management in modern front-end development. The good thing about redux is that it's view agnostic, so you can use it with any framework - Vue, React, Angular, etc. Angular has its own redux alternative called ngrx, which empowers a declarative approach with RxJS, but in general it follows the same underlying pattern (a minimal sketch follows below). My recommendation would be to understand how to manage the state of our applications, because that's probably the most complex problem we're solving in our day-to-day development process. Once we have a solid understanding of this, we can easily switch between different frameworks depending on the problems we're solving, what the rest of the team is using, and the project's requirements.

A very interesting characteristic of learning Angular is that if we get comfortable with the framework, we'd also be familiar with TypeScript, RxJS, and techniques such as dependency injection. This may look like initial overhead, but it's a great long-term investment which pays off really well in large projects.
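To make the state-management idea concrete, here is a minimal sketch of the redux pattern Gechev describes: a pure reducer derives the next state from the current state and an action, and a tiny store dispatches actions and notifies subscribers. As he notes, the pattern is view- and language-agnostic, so this sketch uses Python purely for illustration; `counter_reducer`, `Store`, and the action names are hypothetical, not taken from any particular library.

```python
# Minimal sketch of the redux pattern: state is never mutated in place;
# a pure reducer computes the next state from (state, action).

def counter_reducer(state, action):
    """Pure function: the same inputs always produce the same output."""
    if action["type"] == "INCREMENT":
        return {**state, "count": state["count"] + 1}
    if action["type"] == "DECREMENT":
        return {**state, "count": state["count"] - 1}
    return state  # unknown actions leave the state unchanged


class Store:
    """Tiny store: holds state, dispatches actions, notifies subscribers."""

    def __init__(self, reducer, initial_state):
        self._reducer = reducer
        self._state = initial_state
        self._subscribers = []

    def dispatch(self, action):
        self._state = self._reducer(self._state, action)
        for callback in self._subscribers:
            callback(self._state)

    def subscribe(self, callback):
        self._subscribers.append(callback)


store = Store(counter_reducer, {"count": 0})
store.subscribe(lambda s: print("state is now:", s))
store.dispatch({"type": "INCREMENT"})  # state is now: {'count': 1}
store.dispatch({"type": "INCREMENT"})  # state is now: {'count': 2}
```

The key design choice is that the reducer never mutates state in place, which is what makes state transitions predictable and easy to test - a property that redux-style libraries such as ngrx rely on.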
How important is TypeScript to front end development?

How important is TypeScript to modern web development? Why?

Over the past couple of years I've seen a strong increase in the excitement around the language, not only in the Angular world but also in React and Vue. I'm personally using TypeScript for a few projects - a platform that we built with React and an educational application written in Angular. I see a lot of value in using TypeScript. Recently I haven't started any project with JavaScript - for everything new I'm using TypeScript, and I'm trying to migrate as many of my existing projects as possible. There are a few reasons for this: TypeScript provides a great development experience! Especially combined with VS Code, you can instantly notice when you've misspelled a property or method, or when you're trying to access a property of a nullable value. It gives you a sense of security that your program is correct to a given extent. Of course, TypeScript cannot save us from logical mistakes, but if we use its type system wisely, we can get great benefits.

You might be curious what those benefits are. Well, TypeScript can help us reduce the number of bugs in our programs. In the study "To Type or Not to Type: Quantifying Detectable Bugs in JavaScript", the authors showed that the average JavaScript program can benefit from a 15% bug reduction if it uses the type system of TypeScript. The study used TypeScript version 2.0; with the latest features of the language, the number of detectable bugs is growing dramatically. Recently, webpack has been leveraging TypeScript as well, because it helps discover already existing issues in the codebase.

Web developers and JavaScript fatigue

Do you think we're past web developers experiencing 'JavaScript fatigue'?

JavaScript is very dynamic and it moves very quickly. There are a lot of potential issues which could be caused by a variety of reasons. With semantic versioning and powerful type systems (such as the type system of TypeScript), we're walking in the right direction, but we definitely have a long way to go until the ecosystem matures.

Web development over the next 12 months: WebAssembly and machine learning

What do you think will be the most important thing for developers to learn in the next 12 months?

There are a lot of exciting things happening nowadays! Web browsers are getting more and more powerful, exposing hundreds of APIs and opportunities. WebAssembly is moving very quickly, and I believe that together with Rust it has a lot of potential in the future. On the other hand, Google recently announced TensorFlow.js. This is a library which allows us to use machine learning (ML) in the browser. In the coming years, ML is going to take a larger portion of our development process (directly or indirectly) for:

- Implementing features in our applications
- Improving our development process

I'm specifically interested in the second point - improving our development process by using ML. Together with Addy Osmani, Kyle Mathews, and Katie Hempenius, we've been working on a toolkit called Guess.js. It aims to provide predictive bundling and pre-fetching based on ML techniques, in order to let us develop faster Angular/React/Vue/etc. applications. I'm really excited about what's coming up in the near future!

So are we! Thanks for taking the time to speak to us, Minko!

“Pandas is an effective tool to explore and analyze data”: An interview with Theodore Petrou

Amey Varangaonkar
24 Apr 2018
9 min read
It comes as no surprise to many developers that Python has grown to become the preferred language of choice for data science. One of the reasons for its staggering adoption in the data science community is its rich suite of libraries for effective data analysis and visualization - allowing you to extract useful, actionable insights from your data. Pandas is one such Python-based library that provides a solid platform to carry out high-performance data analysis.

Ted Petrou is a data scientist and the founder of Dunder Data, a professional educational company focusing on exploratory data analysis. Before founding Dunder Data, Ted was a data scientist at Schlumberger, a large oil services company, where he spent the vast majority of his time exploring data. Ted received his Master's degree in statistics from Rice University and has used his analytical skills to play poker professionally. He taught math before becoming a data scientist. He is a strong supporter of learning through practice and can often be found answering questions about pandas on Stack Overflow.

In this exciting interview, Ted takes us through an insightful journey into pandas - Python's premier library for exploratory data analysis - and tells us why it is the go-to library for many data scientists looking to discover new insights in their data.

Key Takeaways

- Data scientists are in the business of making predictions. To make the right predictions, you must know how to analyze your data.
- To perform data analysis efficiently, you must have a good understanding of the concepts as well as be proficient with tools like pandas.
- Pandas Cookbook contains step-by-step solutions for mastering pandas syntax while going through the data exploration journey (missteps and all) to solve the most common and not-so-common problems in data analysis.
- Unlike R, which has several different packages for different data science tasks, pandas offers all data analysis capabilities as a single large Python library.
- Pandas has good time-series capabilities, making it well-suited for building financial applications. That said, its best use is in data exploration - finding interesting discoveries within the data.
- Ted says beginners in data science should focus on learning one data science concept at a time and mastering it thoroughly, rather than getting an overview of multiple concepts at once.

Let us start with a very fundamental question - why is data crucial to businesses these days? What problems does it solve?

All businesses, from a child's lemonade stand to the largest corporations, must account for all their operations in order to be successful. This accounting of supplies, transactions, people, etc., is what we call 'data', and it gives us historical records of what has transpired in a business. Without this data, we would be reduced to oral history, or what humans used for accounting before the advent of writing systems. By collecting and analyzing data, we gain a deeper understanding of how the business is progressing. In the most basic instances, such as with a child's lemonade stand, we know how many glasses of lemonade have been sold, how much was spent on supplies, and, importantly, whether the business is profitable. This example is incredibly trivial, but it should be noted that such simple data collection is not something that comes naturally to humans.
For instance, many people have a desire to lose weight at some point in their life, but fail to accurately record their daily weight or calorie intake in any regular manner, despite the large number of free services available to help with this.

There are so many Python-based libraries out there which can be used for a variety of data science tasks. Where does pandas fit into this picture?

Pandas is the most popular library for performing the most fundamental tasks of a data analysis. Not many libraries can claim to provide the power and flexibility of pandas for working with tabular data.

How does pandas help data scientists in overcoming different challenges in data analysis? What advantages does it offer over domain-specific languages such as R?

One of the best reasons to use pandas is because it is so popular. There is a tremendous amount of resources available for it, and an excellent database of questions and answers on Stack Overflow. Because the community is so large, you can almost always get an immediate answer to your problem. Comparing pandas to R is difficult, as R is an entire language that provides tools for a wide variety of tasks. Pandas is a single large Python library. Nearly all the tasks possible in pandas can be replicated with the right library in R.

We would love to hear your journey as a data scientist. Did having a master's degree in statistics help you in choosing this profession? Also, tell us something about how you leveraged analytics in professional poker!

My journey to becoming a "data scientist" began long before the term even existed. As a math undergrad, I found out about the actuarial profession, which appealed to me because of its meritocratic pathway to success. Because I wasn't certain that I wanted to become an actuary, I entered a Ph.D. program in statistics in 2004, the same year that an online poker boom began. After a couple of unmotivated and half-hearted attempts at learning probability theory, I left the program with a master's degree to play poker professionally. Playing poker has been by far the most influential and beneficial resource for understanding real-world risk. Data scientists are in the business of making predictions, and there's no better way to understand the outcomes of predictions you make than by exposing yourself to risk.

Your recently published 'pandas Cookbook' has received a very positive response from readers. What problems in data analysis do you think this book solves?

I worked extremely hard to make pandas Cookbook the best available book on the fundamentals of data analysis. The material was formulated by teaching dozens of classes and hundreds of students with my company Dunder Data and my meetup group Houston Data Science. Before getting to what makes a good data analysis, it's important to understand the difference between the tools available to you and the theoretical concepts. Pandas is a tool, and is not much different from a big toolbox in your garage. It is possible to master the syntax of pandas without actually knowing how to complete a thorough data analysis. This is like knowing how to use all the individual tools in your toolbox without knowing how to build anything useful, such as a house. Similarly, understanding theoretical concepts such as 'split-apply-combine' or 'tidy data' without knowing how to implement them with a specific tool will not get you very far. Thus, in order to do a good data analysis, you need to understand both the tools and the concepts (a minimal 'split-apply-combine' sketch follows below).
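As a concrete illustration of the 'split-apply-combine' concept Petrou mentions, here is a minimal pandas sketch. The `sales` DataFrame and its column names are invented for the example; the pattern itself is what matters: split the rows into groups, apply an aggregation to each group, and combine the results into a single table.

```python
import pandas as pd

# Hypothetical sales records, invented for this example.
sales = pd.DataFrame({
    "region": ["North", "North", "South", "South", "South"],
    "product": ["lemonade", "cookies", "lemonade", "cookies", "lemonade"],
    "revenue": [120.0, 80.0, 95.0, 60.0, 110.0],
})

# Split by region, apply several aggregations to each group,
# and combine the per-group results into one summary table.
summary = sales.groupby("region")["revenue"].agg(["sum", "mean", "count"])
print(summary)
#           sum        mean  count
# region
# North   200.0  100.000000      2
# South   265.0   88.333333      3
```

The same split-apply-combine shape underlies far more elaborate analyses; only the grouping keys and the functions applied to each group change.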
This is what pandas Cookbook attempts to provide. The syntax of pandas is learned together with common theoretical concepts, using real-world datasets.

Your readers loved the way you structured the book and the kind of datasets, examples, and functions you chose to showcase pandas in all its glory. Was it experience, intuition, or observation that led to this fantastic writing insight?

The official pandas documentation is very thorough (well over 1,000 pages) but does not present the features as you would see them in a real data analysis. Most of the operations are shown in isolation on contrived or randomly generated data. In a typical data analysis, it is common for many pandas operations to be called one after another. The recipes in pandas Cookbook expose this pattern to the reader, which will help them when they are completing an actual data analysis. This is not meant to disparage the documentation, as I have read it multiple times myself and recommend reading it along with pandas Cookbook.

Quantitative finance is one domain where pandas finds major application. How does pandas help in developing better financial applications? In what other domains does pandas find important applications, and how?

Pandas has good time-series capabilities, which makes it well-suited for financial applications. Its ability to group by specific time periods is a very useful feature (a short sketch follows below). In my opinion, pandas' most important application is in exploratory data analysis. It is possible for an analyst to quickly use pandas to find interesting discoveries within the data and visualize the results with either matplotlib or Seaborn. This tight integration, coupled with the Jupyter Notebook interface, makes for an excellent ecosystem for generating and reporting results to others.
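To illustrate the time-period grouping Petrou highlights, here is a minimal sketch. The daily price series is randomly generated purely for the example; `resample` splits a datetime-indexed series into calendar periods and summarizes each one.

```python
import numpy as np
import pandas as pd

# Hypothetical daily closing prices, generated only for illustration.
dates = pd.date_range("2024-01-01", periods=90, freq="D")
prices = pd.DataFrame(
    {"close": 100 + np.random.default_rng(0).standard_normal(90).cumsum()},
    index=dates,
)

# Group the daily observations into month-end buckets and summarize each
# period ("ME" is month-end frequency; older pandas releases spell it "M").
monthly = prices["close"].resample("ME").agg(["first", "last", "mean"])
print(monthly)
```

The same mechanism - `resample`, or `pd.Grouper` inside a `groupby` - is what makes monthly or quarterly summaries of financial data a one-liner.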
Please tell us more about 'pandas Cookbook'. What in your opinion are the 3 major takeaways from it? Are there any prerequisites needed to get the most out of the book?

The only prerequisite for pandas Cookbook is a fundamental understanding of the Python programming language. The recipes progress in difficulty from chapter to chapter, and for those with no pandas experience, I would recommend reading it cover to cover. One of the major takeaways from the book is being able to write modern and idiomatic pandas code. Pandas is a huge library, and there are always multiple ways of completing each task. This is more of a negative than a positive, as beginners notoriously write poorly structured and inefficient code. Another takeaway is the ability to probe and investigate data until you find something interesting. Many of the recipes are written as if the reader is experiencing the discovery process alongside the author. There are occasional (and purposeful) missteps in some recipes to show that the right course of action is often not known in advance. Lastly, I wanted to teach common theoretical concepts of doing a data analysis while simultaneously teaching pandas syntax.

Finally, what advice would you have for beginners in data science? What things should they keep in mind while designing and developing their data science workflow? Are there any specific resources they could refer to, apart from this book of course?

For those just beginning their data science journey, I would suggest keeping their 'universe small'. This means concentrating on as few things as possible. It is easy to get caught up in the feeling that you need to keep learning as much as possible. Mastering a few subjects is much better than having a cursory knowledge of many.

If you found this interview intriguing, make sure you check out Ted's pandas Cookbook, which presents more than 90 unique recipes for effective scientific computation and data analysis.

Understanding the Fundamentals of Analytics Teams with John K. Thompson

Expert Network
06 Apr 2021
6 min read
Key takeaways:

- Data scientists need a tailored portfolio of projects that they own and manage in order to have a sense of autonomy.
- The top skill or personality trait a successful data scientist can possess (and should possess) is curiosity.
- Managing a successful analytics team and individual analytics professionals is different from managing any other type of team.
- Data and analytics will be ubiquitous in the very near future.
- Analytics teams are different from any other team in the organization, and analytics professionals are a unique variant of creative professionals.
- Providing challenging, interesting, and valuable work in the form of a personal project portfolio for a data scientist can be done, and needs to be done, to ensure productivity, job satisfaction, value delivery, and retention.

We interviewed analytics leader and bestselling author John K. Thompson on data analytics, the future of analytics, and his recent book, Building Analytics Teams.

The interview in detail:

1. What are the fundamental concepts of building and managing a high-performing analytics team?

It is critically important to remember that data scientists are creative and intelligent people. They cannot be managed well in a command-and-control environment.

Data scientists need a tailored portfolio of projects that they own and manage in order to have a sense of autonomy. If they have a portfolio of projects and can manage their time and effort, the productivity of the team will be much higher than what is typically seen in teams managed in a traditional manner.

The relationship of the analytics leader with their peers and the executives of the company is critically important to the success of the analytics team. It is also very important to realize that most analytics projects fail at the point where analytical models are to be implemented in production systems.

2. Tell us about your book, Building Analytics Teams. How is your book new and/or different from other books on data analytics?

Building Analytics Teams is focused on the practical challenges faced by people who are building and managing high-performance analytics teams, and by the staff members who make up those teams.

The book is different from other books in that it examines the process of building and managing a team from a holistic view. It considers the organizational framework, the required processes, the people, the projects, the problems, and the pitfalls. The content guides the reader through how to navigate these challenges and provides illustrations and examples of how to be successful. The book is a "how to" guide on successfully managing the analytics process in a large corporate environment.

3. What was the motivation behind writing this book?

I have not seen a book like this, and I wish I had had a book like this earlier in my career. I have built a number of analytics teams. While building and growing those teams, I noticed certain recurring patterns. I wanted to address the misconceptions and misperceptions people hold about analytics teams.

Analytics teams are unique. The team members who are successful have a different mindset and attitude toward project work and teamwork. I wanted to communicate the differences inherent in a high-performance analytics team when compared to other teams. Also, I wanted to communicate that managing a successful analytics team and individual analytics professionals is different from managing any other type of team.
I wanted to write a guide for managers and analytics professionals to help them understand how the broader organization views them, and how they can interface and interact with their peers in related organizational functions to increase the probability of joint success.

4. What should be the starting point for data analytics enthusiasts aiming to begin their journey in data analytics? How do you think your book will help them in their journey?

It depends on where they are starting their journey.

If they are in the process of completing their undergraduate or graduate studies, I would suggest that they take classes in programming, data science, or analytics. If they are professionals, I would suggest that they take classes on Coursera, Udemy, or any other online educational platform to see if they have a real interest in, and affinity for, analytics. If they do have an interest, they should start working on analytics for themselves to test out analytical techniques, apply critical thinking, and try to understand what they can and cannot see in the data. If that works out and their interest remains, they should volunteer for projects at work that will enable them to work with data and analytics in a work setting. If they have the education, the affinity, and the skill, then they should apply for a data science position. Grab some data and make a difference!

5. What are the key skills required for someone to be successful working in data analytics? What are the pain points/challenges one should know?

The top skill or personality trait a successful data scientist can possess (and should possess) is curiosity. Without curiosity, you will find it difficult to be successful as a data scientist. It helps to be talented and well educated, but I have met many stellar data scientists who are neither. Beyond those traits, it is more important to be diligent and persistent. The most successful business analysts and data scientists I have ever worked with were all naturally and perpetually curious, and had a level of diligence and persistence that was impressive.

As for pain points and challenges: data scientists need to work on improving their listening skills, and their written and verbal communication and presentation skills. All data scientists need improvement in these areas.

6. What is the future of analytics? What will we see next?

I do believe that we are entering an era where data and analytics will increase in importance in all human endeavors. Certainly, corporate use of data and analytics will increase in importance - hence the focus of the book. But beyond corporations, the active and engaged use of data and analytics will grow in importance and daily use in managing multiple aspects of people's personal lives, academic pursuits, governmental policy, military operations, humanitarian aid, the tailoring of products and services, the building of roads, towns, and cities, the planning of traffic patterns, the provisioning of local, state, and federal services, intergovernmental relationships, and more.

There will not be an element of human endeavor that will not be touched and changed by data and analytics. Data is ubiquitous today, and data and analytics will be ubiquitous in the very near future. We will see more discussions on who owns data and who should be able to monetize it. We will experience increasing levels of AI and analytics across all the systems we interact with, and most of it will go unnoticed, operating in the background for our benefit.
About: John K. Thompson is an international technology executive with over 30 years of experience in the business intelligence and advanced analytics fields. Currently, John is responsible for the global Advanced Analytics and Artificial Intelligence team and its efforts at CSL Behring.