
Author Posts

121 Articles

‘If tech is building the future, let's make that future inclusive and representative of all of society’ – An interview with Charlotte Jee

Packt
27 Feb 2018
5 min read
Charlotte Jee is a journalist specialising in technology and politics, currently working as editor for Techworld. Charlotte has founded, hosted and moderated a range of events including The Techies awards and her own ‘Women in Tech Speak Up’ event series. In 2017, Charlotte set up Jeneo with the mission to give women and underrepresented people a voice at tech events. Jeneo helps companies source great individuals to speak at their events, and encourages event organisers to source panellists from a range of backgrounds. We spoke to Charlotte in a live Twitter Q&A to find out a bit more about Jeneo and Charlotte’s experience as a woman in the tech sector.

Packt: Why did you decide to set up Jeneo?

CJ: Good question! Basically, blame manels (aka all-male panels). I got tired of going to tech events and not seeing even one woman speaking. It's symptomatic of a wider issue with diversity within tech. And I resolved to start to do something about it!

Packt: What have been your most interesting findings from the research you've carried out into women at tech events?

CJ: Interestingly, the ratio of women to men speakers varies hugely across different events. However, promisingly, I've found that those who have worked on this specific issue have successfully upped their number of women speakers. The general finding was that across the top 60 London tech events, women comprise about a quarter of the speakers. The full research hasn't concluded just yet – so stay tuned.

Packt: How has the industry changed since you started in tech? Do you think it is getting easier for women to get into the tech world?

CJ: To be honest, I don't think that much progress has been made since I started working in tech. However, in the last year, with the scandals at Uber and Google and the VC sexual harassment stories, the industry is finally starting to really focus on making itself more welcoming for women.

Packt: What do you think is the best part of being a woman in the tech industry?

CJ: I think the answer is the same regardless of your gender – the tech industry is at the forefront of the latest innovations within society and it is constantly changing, so you never get bored.

Packt: What do you think is the worst part of being a woman in the tech industry?

CJ: To be clear, I believe the majority of workplaces within tech are perfectly welcoming to women – however, for the persistent minority that aren't, women can face all sorts of discrimination, both subtle and unsubtle.

Packt: What advice would you give to a woman considering a career in the tech industry?

CJ: Go for it! I honestly can't think of a better industry to work in. You can pretty much guarantee you'll be able to find work no matter what, so long as you keep your skills up to date.

Packt: In what ways – positive or negative – do you think our education system influences the number of women in tech?

CJ: Good question. I actually think that it's in education that this problem starts – from the experts I've spoken to, it seems like our education system too often actively discourages women from pursuing careers in tech.

Packt: What has been your biggest success in your tech career so far?

CJ: I was really honoured when my boss promoted me to Techworld News editor in 2015. More recently, it'd have to be the Women in Tech Speak Up event for 300 people I organised (single-handedly) in August 2017. Also, creating this list of 348 women working in tech in the UK, which in many ways kicked this all off.

Packt: What advice would you give to help tech companies increase their gender diversity?

CJ: Way too much for one tweet here. Get senior buy-in, look carefully at your hiring process, be clear on culture and working practices, collect and publish data, provide flexible working (men want this too!) – if you want more detail, please get in touch with me.

Packt: What do you think the future looks like for women working in tech?

CJ: I feel more optimistic now than I have at any point. I think there is a huge amount of desire within the industry to be more inclusive. However, it's translating that goodwill into action – that's what I'll be focusing on.

Packt: Are there any tech companies doing awesome things to increase diversity in their business?

CJ: There's some impressive work from Monzo at the moment, who are highlighting the need for a diverse and inclusive team that represents their user base. There are plenty of others doing good work too, including Amazon UK.

Packt: Are there any particular women in tech who have inspired you, and who should we be following on Twitter?

CJ: How long do you have? So many: @emercoleman @kitterati @ChiOnwurah @NAUREENK @annkempster @cathywhite10 @carrie_loves_ @JeniT @lily_dart @annashipman @yoditstanton – and that is truly just for starters. And of course, I have to add, the original woman in tech who inspired me is my Mum, Jane Jee (@janeajee) – she's CEO of RegTech startup Kompli-Global Compliance (@kompliglobal). Plus, obviously, everyone on this list.

Packt: Do you have any tips or advice for women to get on panels or bag speaking slots at tech events?

CJ: Start small if possible and build up your confidence – don't be afraid to put yourself out there! My research has found speaker submissions overwhelmingly come from men, so get proactive and contact events you'd like to speak at.

Thanks for chatting with us, Charlotte! Find out more about Jeneo here and follow Charlotte on Twitter: @CharlotteJee.


Cybersecurity researcher "Elliot Alderson" talks Trump and Facebook, Google and Huawei, and teaching kids online privacy [Podcast]

Richard Gall
08 Aug 2019
3 min read
For anyone who's watched Mr. Robot, the name Elliot Alderson will sound familiar. However, we're not talking about Rami Malek's hacker alter ego – instead, the name has been adopted as an alias by a real-life white-hat hacker who has been digging into the dark corners of the wild and often insecure web. Elliot's real name is Baptiste Robert (whisper it...) – he was kind enough to let us peek beneath the pseudonym, and spoke to us about his work as a cybersecurity researcher and what he sees as the biggest challenges in software security today.

Listen: https://soundcloud.com/packt-podcasts/cybersecurity-researcher-elliot-alderson-on-fighting-the-good-fight-online

"Elliot Alderson" on cybersecurity, politics, and regulation

In the episode we discuss a huge range of topics, including:

  • Security and global politics – is it evolving the type of politics we have? Is it eroding trust in established institutions?
  • Google’s decision to remove its apps from Huawei devices
  • The role of states and the role of corporations – who is accountable? Who should we trust?
  • Regulation
  • Technological solutions

What Elliot Alderson has to say on the podcast episode...

On Donald Trump's use of Facebook in the 2016 presidential election: “We saw that social networks have an impact on elections. Donald Trump was able to win the election because of Facebook - because he was very aggressive on Facebook and able to target a lot of people…”

On foreign interference in national elections: “We saw, also, that these tools… have been used by countries… in order to manipulate the elections of another country. So as a technician, as a security researcher, as an infosec professional, you need to ask yourself what is happening - can we do something against that? Can we create some tool? Can we fight this phenomenon?”

How technology professionals and governing institutions should work together: “We should be together. This is the responsibility of government and countries to find vulnerabilities and to ensure the security of products used by its citizens - but it’s also the responsibility of infosec professionals and we need to work closely with governments to be sure that nobody abuses vulnerabilities out there…”

On teaching the younger generation about privacy and protecting your data online: “I think government and countries should teach young people the value of personal data… personally, as a dad, this is something I’m trying to teach my kids - and say okay, this website is asking you your personal address, your personal number, but do they need it? ...In a lot of cases the answer is quite obvious: no, they don’t need it.”

On Google banning Huawei: “My issue with the Huawei story and the Huawei ban is that as a user, as a citizen, we are only seeing the consequences. Okay, Google ban Huawei - Huawei is not able to use Google services. But we don’t have the technical information behind that.”

On the importance of engineering ethics: “If your boss is coming to you and saying ‘I would like to have an application which is tracking people during their day to day work’ what is your decision? As developers, we need to say ‘no: this is not okay. I will not do this kind of thing’”.

Read next: Doteveryone report claims the absence of ethical frameworks and support mechanisms could lead to a ‘brain drain’ in the U.K. tech industry

Follow Elliot Alderson on Twitter: @fs0c131y


Listen: Herman Fung explains what it's like to manage programmers and software engineers [Podcast]

Richard Gall
13 Nov 2019
2 min read
Management is a discipline that isn't short of coverage. In fact, it's probably fair to say that the world throws too much attention its way. This only serves to muddy the waters of management principles and make it hard to determine what really matters. To complicate things, technology is ripping up the rule book when it comes to processes and hierarchies. Programming and engineering are forcing management gurus to rethink what it means to 'manage' today.

However, while the wealth of perspectives on modern management amounts to a bit of a cacophony, looking specifically at what it means to be a manager in a world defined by software can be useful. That's why, in the latest episode of the Packt Podcast, we spoke to Herman Fung. Herman is someone with both development and management experience, and, following the publication of his book The Successful Software Manager earlier this year, he's been spending a lot of time seriously considering what it means to be a software manager.

Listen to the podcast episode: https://soundcloud.com/packt-podcasts/what-does-it-mean-to-be-a-software-manager-herman-fung-explains

Some of the topics covered in this episode include:

  • How to approach software management if you haven't done it before
  • The impact of Agile and DevOps
  • What makes managing in the context of engineering and technology different from other domains
  • The differences between leadership and management

You can buy The Successful Software Manager from the Packt store as a print or eBook. Follow Herman on Twitter: @FUNG14


Expert Insights: How sports analytics is empowering better decision-making

Amey Varangaonkar
14 Nov 2017
11 min read
Analytics is slowly changing the face of the sports industry as we know it. Data-driven insights are being used to improve team and individual performance, and to get that all-important edge over the competition. But what exactly is sports analytics? And how is it being used? What better way to get answers to these questions than asking an expert himself!

Gaurav Sundararaman is a Senior Stats Analyst at ESPN, currently based in Bangalore, India. With over 10 years of experience in the field of analytics, Gaurav worked as a research analyst and consultant in the initial phase of his career. He ventured into sports analytics in 2012 and played a major role in the analytics division of SportsMechanics India Pvt. Ltd., where he was the analytics consultant for the T20 World Cup-winning West Indies team in 2016.

In this interview, Gaurav takes us through the current landscape of sports analytics, and talks about how analytics is empowering better decision-making in sports.

Key Takeaways

  • Sports analytics pertains to finding actionable, useful insights from sports data which teams can use to gain a competitive advantage over the opposition
  • Instincts backed by data make on- and off-field decisions more powerful and accurate
  • The rise of IoT and wearable technology has boosted sports analytics; with more data available for analysis, insights can be unique and very helpful
  • Analytics is being used in sports for everything from improving player performance to optimizing ticket prices and understanding fan sentiment
  • Knowledge of tools for data collection, analysis and visualization such as R, Python and Tableau is essential for a sports analyst
  • A thorough understanding of the sport, an up-to-date skillset and strong communication with players and management are equally important for performing efficient analytics
  • Adoption of analytics within sports has been slow but steady; more and more teams are now realizing the benefits of sports analytics and adopting an analytics-based strategy

Complete Interview

Analytics is finding widespread applications in almost every industry today – how has the sports industry changed over the years? What role is analytics playing in this transformation?

The sports industry has been relatively late in adopting analytics. That said, the use of analytics in sports has also varied geographically. In the West, analytics plays a big role in helping teams, as well as individual athletes, make decisions. Better infrastructure and quick adoption of the latest trends in technology are important factors here. Also, investment in sports starts from a very young age in the West, which makes a huge difference. In contrast, many countries in Asia are still lagging behind when it comes to adopting analytics, and still rely on traditional techniques to solve problems. A combination of analytics with traditional knowledge from experience would go a long way in helping teams, players and businesses succeed.

Previously the sports industry was a very closed community. Now, with the advent of analytics, the industry has managed to expand its horizon. We witness more non-sportsmen playing a major part in decision making. They understand the dynamics of the sports business and how to use data-driven insights to influence it.

Many major teams across different sports such as football (soccer), cricket, American football, basketball and more have realized the value of data and analytics. How are they using it? What advantages does analytics offer them?

One thing I firmly believe is that analytics can't replace skills and can't guarantee wins. What it can do is ensure there is logic behind certain plans and decisions. Instincts backed by data make the decisions more powerful. I always tell the coaches or players – go with your gut and instincts as Plan A. If that does not work out, your fallback could be Plan B, based on trends and patterns derived from data. It turns out to be a win-win for both. Analytics offers a neutral perspective which sometimes players or coaches may not realize. Each sport has a unique way of applying analytics to make decisions and obviously, as analysts, we need to understand the context and map the relevant data. As far as using the analytics is concerned, the goals are pretty straightforward – be the best, beat the opponents and aim for sustained success. Analytics helps you achieve each of these objectives.

The rise of IoT and wearable technology over the last few years has been incredible. How has it affected sports, and sports analytics in particular?

It is great to see that many companies are investing in such technologies. It is important to identify where wearables and IoT can be used in sport and where they can cause maximum impact. These devices allow in-game monitoring of players, their performance, and their current physical state. Also, I believe that beyond on-field uses, these technologies will be very useful in engaging fans. Data derived from these devices could be used in broadcasting as well as in providing a good experience for fans in stadiums. This will encourage more and more people to watch games in stadiums rather than in the comfort of their homes. We have already seen a beginning, with a few stadiums around the world leveraging technology (IoT). The Mercedes-Benz Stadium (home of the Atlanta Falcons) is a high-tech stadium powered by IBM. Sacramento is building a state-of-the-art facility for the Sacramento Kings. This is just the start, and it will only get better with time.

How does one become a sports analyst? Are there any particular courses or certifications that one needs to complete? Can you share with us your journey in sports analytics?

To be honest, there are no professional courses yet in India to become an analyst. A couple of colleges have just started offering sports analytics as a course in their postgraduate programs. However, there are a few companies (Sports Mechanics and Kadamba Technologies in Chennai) that offer jobs that can enable you to become a sports analyst if you are really good. If you are a freelancer, then my advice would be to ensure you brand yourself well, showcase your knowledge through social media platforms and get a breakthrough via contacts.

After my MBA, Sports Mechanics (a leader in this space), a company based in Chennai, was looking for someone to start their data practice. I was just lucky to be at the right place at the right time. I worked there for 4 years and was able to learn a lot about the industry and what works and what does not. Being in a small company, I was lucky to don multiple hats and work on different projects across the value chain. I then moved on and joined the lovely team at ESPNcricinfo, where I work in their stats team.

What are the tools and frameworks that you use for your day-to-day tasks? How do they make your work easier?

There are no specific tools or frameworks; it depends on the enterprise you are working for. Usually, they are proprietary tools of the company. Most of these tools are used either to collect, mine or visualize data. Interpreting the information and presenting it in a manner in which users understand is important, and that is where certain applications or frameworks are used. However, to be ready for the future it would be good to be skilled in tools that support data collection, analysis and visualization – namely R, Python and Tableau, to name a few.

Do sports analysts have to interact with players and the coaching staff directly? How do you communicate your insights and findings with the relevant stakeholders?

Yes, they have to interact with players and management directly; if not, the impact will be minimal. Communicating insights is very important in this industry. Too much analysis could lead to paralysis. We need to identify what exactly each player or coach is looking for, based on their game, and try to provide them the information in a crisp manner which helps them make decisions on and off the field. For each stakeholder the magnitude of the information provided is different: for the coach and management, the insights can be detailed, while for the players we need to keep it short and to the point.

The insights you generate must not be limited to enhancing the performance of a team on the field – they can do much more than that. Could you give us some examples?

Insights can vary. For the management, it could deal with how to maximise revenue or save some money in an auction. For coaches, it could help them know about their team's as well as the opposition's strengths and weaknesses from a different perspective. For captains, data could help in identifying key strategies on the field. For example, in cricket, it could help the captain determine which bowler to bring on against which opposition batsman, or where to place the fielders (the toy sketch at the end of this interview illustrates the idea). Off the field, one area where analytics could play a big role is grassroots development: tracking an athlete from an early age to ensure he is prepared for the biggest stage. Monitoring performance, improving physical attributes by following a specific regimen, assessing injury records and designing specific training programs are some ways in which this could be done.

What are some of the other challenges that you face in your day-to-day work?

Growth in this industry can be slow sometimes. You need to be very patient, work hard and ensure you follow the sport very closely. There are not many analytical obstacles as such, but understanding the requirements and what exactly the data needs are can be quite a challenge.

Despite all the buzz, there are quite a few sports teams and organizations who are still reluctant to adopt an analytics-based strategy – why do you think that is the case? What needs to change?

The reason for the slow adoption could be the lack of successful case studies and awareness. In most sports, when so many decisions are taken on the field, sometimes the players' ability and skill seem far superior to anything else. As more instances of successful execution of data-based trends come up, we are likely to see more teams adopting a data-based strategy. Like I mentioned earlier, analytics needs to be used to help the coach and captain take the most logical and informed decisions. Decision-makers need to be aware of the way it is used and how much impact it can have. This awareness is vital to increasing the adoption of analytics in sports.

Where do you see sports analytics in the next 5-10 years?

Today in sports many decisions are taken on gut feeling, and I believe there should be a balance. That is where analytics can help. In sports like cricket, only around 30% of the data is used and more emphasis is given to video. Meanwhile, if we look at soccer or basketball, the usage of data and video analytics is close to 60-70% of its potential. Through awareness and trying out new plans based on data, we can increase the usage of analytics in cricket to 60-70% in the next few years. Despite the current shortcomings, it is fair to say that there is a progressive and positive change at the grassroots level across the world. Data-based coaching and access to technology are slowly being made available to teams as well as budding sportsmen and women.

Another positive is that investment in the sports industry is growing steadily. I am confident that in a couple of years we will see more job opportunities in sports. Maybe in five years the entire ecosystem will be more structured and professional. We will witness analytics playing a much bigger role in helping stakeholders make informed decisions, as data-based insights become even more crucial.

Lastly, what advice do you have for aspiring sports analysts?

My only advice would be: be passionate, build a strong network of people around you, and constantly be on the lookout for opportunities. Also, it is important to keep updating your skillset in terms of the tools and techniques needed to perform efficient and faster analytics. Newer and better tools keep coming up very quickly, and they make your work easier and faster – be on the lookout for such tools! One also needs to identify their own niche based on their strengths and try to build on that. The industry is on the cusp of growth and, as budding analysts, we need to be prepared to take off when the industry matures. Build your brand and talk to more people in the industry – figure out what you want to do to keep yourself in the best position to grow with the industry.
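Gaurav's cricket example – working out which bowler to bring on against a given batsman – boils down to aggregating ball-by-ball records. The sketch below is purely illustrative (made-up players and numbers, not ESPN or SportsMechanics code), but it shows the shape of such a matchup analysis:

```java
import java.util.*;
import java.util.stream.*;

/** Toy matchup analysis: average runs conceded per ball by each bowler against one batsman. */
public class MatchupAnalysis {
    record Delivery(String bowler, String batsman, int runs, boolean wicket) {}

    public static void main(String[] args) {
        // hypothetical ball-by-ball data; a real system would load thousands of deliveries
        List<Delivery> deliveries = List.of(
            new Delivery("Spinner A", "Batsman X", 0, true),
            new Delivery("Spinner A", "Batsman X", 1, false),
            new Delivery("Pacer B",   "Batsman X", 4, false),
            new Delivery("Pacer B",   "Batsman X", 6, false));

        // group deliveries by bowler and average the runs conceded per ball
        Map<String, Double> runsPerBall = deliveries.stream()
            .filter(d -> d.batsman().equals("Batsman X"))
            .collect(Collectors.groupingBy(Delivery::bowler,
                     Collectors.averagingInt(Delivery::runs)));

        // lower is better: here the captain would prefer Spinner A against Batsman X
        runsPerBall.forEach((bowler, avg) ->
            System.out.printf("%s concedes %.2f runs/ball vs Batsman X%n", bowler, avg));
    }
}
```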


Selenium and data-driven testing: An interview with Carl Cocchiaro

Richard Gall
17 Apr 2018
3 min read
Data-driven testing has become a lot easier thanks to tools like Selenium. That's good news for everyone in software development. It means you can build better software that works for users much more quickly. While the tension between performance and the need to deliver will always remain, it's thanks to the efforts of developers to improve testing tools that we are where we are today. We spoke to Carl Cocchiaro about data-driven testing and much more. Carl is the author of Selenium Framework Design in Data-Driven Testing. He also talked to us about his book and why it's a useful resource for web developers interested in innovations in software testing today.

What is data-driven testing?

Packt: Tell us a little bit about data-driven testing.

Carl Cocchiaro: Data-driven testing has been made very easy with technologies like Selenium and TestNG. Users can annotate test methods and add attributes like Data Providers and Groupings to them, allowing users to iterate through the methods with varying data sets. (A minimal sketch of this appears at the end of this piece.)

The key features

Packt: What are the 3 key features of Selenium that make it worth people's attention?

CC: Platform independence, its support for multiple programming languages, and its grid architecture that's really useful for remote testing.

Packt: Could someone new to Java start using Selenium? Or are there other frameworks?

CC: Selenium WebDriver is an API that can be called in Java to test the elements on a browser or mobile page. It is the gold standard in test automation; everyone should start out learning it, and it's pretty fun to use.

What are the main challenges of moving to Selenium?

Packt: What are the main challenges someone might face when moving to the framework?

CC: Like anything else, the language syntax has to be learned in order to be able to test the applications. Along with that, the TestNG framework coupled with Selenium has lots of features for data-driven testing, and there's a learning curve on both.

How to learn Selenium

Packt: How is your book a stepping stone for a new Selenium developer?

CC: The book details how to design and develop a Selenium framework from scratch and how to build in data-driven testing using TestNG and a Data Provider class. It's complex from the start but has all the essentials to create a great testing framework. Readers should get the basics down first before moving towards other types of testing like performance, REST API, and mobile.

Packt: What makes this book a must-have for anyone interested in or working with the tool?

CC: Many Selenium guides are geared towards getting users up and running, but this is an advanced guide that teaches all the tricks and techniques I've learned over 30 years.

Packt: Can you give people 3 reasons why they should read your book?

CC: It's a must-read if you're designing and developing new frameworks, it circumvents all the mistakes users make in building frameworks, and you will be a Selenium rockstar at your company after reading it!

Learn more about software testing:

  • Unit Testing and End-To-End Testing
  • Testing RESTful Web Services with Postman
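To ground Carl's description of Data Providers and groupings, here is a minimal TestNG sketch. It is a hedged illustration rather than anything from the book: the test method and the authenticate() helper are hypothetical stand-ins for a Selenium-driven login flow.

```java
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
import static org.testng.Assert.assertEquals;

public class LoginDataDrivenTest {
    // each row is one data set the test method will be invoked with
    @DataProvider(name = "credentials")
    public Object[][] credentials() {
        return new Object[][] {
            {"alice", "secret1", true},
            {"bob",   "wrong",   false},
        };
    }

    // the same method runs once per row above; "smoke" shows TestNG grouping
    @Test(dataProvider = "credentials", groups = "smoke")
    public void loginBehavesAsExpected(String user, String password, boolean expected) {
        assertEquals(authenticate(user, password), expected);
    }

    // hypothetical stand-in for driving a real login page with Selenium WebDriver
    private boolean authenticate(String user, String password) {
        return "alice".equals(user) && "secret1".equals(password);
    }
}
```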


Red Badger Tech Director Viktor Charypar talks monorepos, lifelong learning, and the challenges facing open source software [Interview]

Richard Gall
10 May 2019
7 min read
Back in February, Viktor Charypar, Tech Director at Red Badger, explained the benefits of using a monorepo. For many teams, especially those without the resources or a highly developed and well-supported engineering culture, the idea of a monorepo might sound a little strange. Following on from that piece, I spoke to Viktor to get a little bit more detail on the benefits of a monorepo and why engineering teams should seriously consider using them.

But I didn't just speak to him about monorepos – I was also interested in how Red Badger builds a forward-thinking engineering culture that can empower its clients, and how the team embraces continuous learning to ensure everyone is on top of the trends and tools that are going to be impacting digital transformation in the future. So, let's take a look at what Viktor had to say...

Why monorepos now?

Richard Gall: Why a monorepo now? If you're dealing with multiple microservices doesn't it make sense to separate source code?

Viktor Charypar: It would seem to make sense, but microservices architecture actually introduces a new level of complexity that needs to be managed, which monorepos make much easier. The main issue is in the dependencies between the services and the contracts they agree to exchange data. If one side of the contract changes in an incompatible way without the other side adapting to that change, the system no longer works. This problem grows with the number of services in the system, and so it's especially prominent in microservices architectures.

Managing each service's source code in a separate repository makes it more difficult to understand its ties to the rest of the system. This forces you to adopt some kind of external versioning scheme, such as semantic versioning, to express which revisions of services work together as things change. These versions are decided by engineers manually as changes are made according to a set of rules, and then the dependent services are updated to refer to the latest version of the service they consume, when they are changed to be compatible. This is time-consuming and error-prone.

In a monorepo, all the components of the system are versioned together and changes can be made across the system. This not only means an external versioning scheme is not required, but it also makes it easier to test and enforce contracts between services. Monorepos really come into their own when they are coupled with a Continuous Integration system aware of the dependencies between components in the repo. Given a change made by a developer and knowledge of the dependency "graph", we can deterministically decide which system components can be affected by the change and therefore need to be retested (a toy sketch of this follows at the end of this interview). It is then up to the owners of each service to do a level of testing of their dependencies to make sure their behaviour didn't change significantly enough to break their own functionality. All this is automated and can be executed without human intervention. Humans just make changes to the software and express expectations on their dependencies in terms of contracts and tests.

Read next: Mozilla's updated policies will ban extensions with obfuscated code

Digital transformation challenges

RG: What common problems are clients coming to you with?

VC: In general, our clients recognise they need help with their digital product capabilities, i.e. delivering interesting propositions to their customers as digital products, typically websites and mobile apps. In large enterprise companies, this ability is generally predicated on going through a digital transformation – adopting agile delivery methods, breaking down functional silos and working in cross-discipline, vertically aligned teams that can decide things quickly and adapt to how customers respond to their product offering.

Our clients typically come to us with one of a few problems, ranging from needing help with product strategy (i.e. what to offer their customers and how to find which of the many ideas have a market fit), through knowing what to do but struggling to deliver it at pace, all the way to already having a digital product offering, but one which doesn't perform as expected – either from the perspective of customer behaviour (e.g. a low conversion rate) or from a technical quality perspective, i.e. the website is unstable, struggles under high load, there are long outages, etc.

While our strength is traditionally in fast product delivery and quality, we can help across the board, from product strategy to what we call empower and embed – demonstrating how to deliver digital products quickly, sustainably and with high quality, helping to build internal capability and then handing over. Essentially we want to help our clients build sustainable businesses.

Read next: Linux forms Urban Computing Foundation: Set of open source tools to build autonomous vehicles and smart infrastructure

Learning and assessing new software and tools

RG: How do you stay on top of new tools? Do you have a learning culture at Red Badger?

VC: We absolutely do. It ranges from simple day-to-day things, like all engineers being encouraged to pair program to learn from each other, or everyone in the company having a yearly training budget as one of the benefits, to things like a yearly internal mini conference called Tech Lab, where all the engineers get together and share their latest learnings and general experience from projects. We have actually recently published a report which started with an activity at the last Tech Lab, which answers a lot of the questions above. It's available on our website here (and we'll also follow it with a series of events). We also run a few regular meetups in London, the biggest being the London React Meetup, which we've been hosting regularly for about four years.

RG: How do you assess tools?

VC: There are a few things we generally look at. The first is obviously experimenting with the tool to work out what it does and how. We're in a privileged position of starting new projects, often greenfield ones, fairly regularly. We typically use about 80% tools we know and trust and about 20% new ones, which we want to try out "in anger" and learn about. We also look at who is behind these tools, which are generally open source, and whether there is momentum behind them and support from the community. Open source software typically goes through a period of rapid innovation and competition in a certain area and then, eventually, the community settles on a few options that work the best and fit the different problems people are trying to solve.

The future of open source – is it sustainable?

RG: How do you see the future of open source – is it sustainable on its current model?

VC: That's an interesting thing to think about! It seems like the open source model is widely misunderstood as software being built by dedicated developers in their free time. But in reality, most large, popular open source projects are backed by large software companies and people maintain them as their day job – for example, Linux, Kubernetes, React. Even the web standards are set by standards bodies comprised of professionals supported by the major browser vendors.

I think the model with a sole maintainer working on something in their spare time doesn't really work if their project gets very popular and the demand on their time grows. We all know how people tend to behave on the internet, and the software industry is no exception, so maintainers who do it as a hobby are at a pretty high risk of burning out. For the major open source projects, this seems to be more of an exception, as they are typically maintained by a team of people employed by a company invested in the project. The sponsor benefits from the community contributions and, if the project gets popular, from controlling the direction of a de facto standard, and the community benefits from someone else doing the lion's share of the work.

I look at it as being similar to science, where different people publicly contribute to push the boundaries of knowledge, just because pooling resources makes more sense and doesn't stop any individual contributor from profiting on the results. In that sense, I think it's a pretty sustainable model and leads to better quality, more versatile software.
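Returning to Viktor's point about dependency-aware CI: once the dependency graph is known, deciding what to retest after a change is a simple reachability problem over reverse dependencies. The sketch below is a toy illustration with hypothetical component names, not Red Badger tooling:

```java
import java.util.*;

/** Toy sketch: find every component affected by a change, given reverse dependencies. */
public class AffectedComponents {
    static Set<String> affected(Map<String, List<String>> dependents, String changed) {
        Set<String> result = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>(List.of(changed));
        while (!queue.isEmpty()) {
            String current = queue.poll();
            if (result.add(current)) {                       // visit each component once
                queue.addAll(dependents.getOrDefault(current, List.of()));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // hypothetical monorepo: api-gateway and billing consume auth-lib, and so on
        Map<String, List<String>> dependents = Map.of(
            "auth-lib", List.of("api-gateway", "billing"),
            "api-gateway", List.of("web-frontend"));
        // a change to auth-lib means retesting exactly these four components
        System.out.println(affected(dependents, "auth-lib"));
        // -> [auth-lib, api-gateway, billing, web-frontend]
    }
}
```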

Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]

Richard Gall
09 Jul 2019
2 min read
No one can seem to agree on what DevOps really is. Although it's been around for the better part of a decade, it still inspires a good deal of confusion within organizations and across engineering teams. But perhaps we're all overthinking it?

To get to the heart of the issues and debates around DevOps, we spoke to Viktor Farcic in the latest episode of the Packt Podcast. Viktor is a consultant at CloudBees, but he's also a prolific author, having written multiple books for Packt and other publishers. Most recently he helped put together the series of interviews that make up DevOps Paradox, which was published in June.

Listen to the podcast here: https://soundcloud.com/packt-podcasts/why-devops-isnt-really-any-different-from-agile-an-interview-with-viktor-farcic

Viktor Farcic on DevOps and agile and their importance in today's cloud-native world

In the podcast, Farcic talks about a huge range of issues within DevOps. From the way the term itself has been used and misused by technology leaders, to its relationship to containers, cloud, and serverless, he provides some clarifications to what he sees as common misconceptions.

What's covered in the podcast:

  • What DevOps means today and its evolution over the last decade
  • Its importance in the context of cloud and serverless
  • DevOps tools
  • Is DevOps a specialized role? Or is it something everyone that writes code should do?
  • How it relates to roles like Site Reliability Engineering (SRE)

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

What Viktor had to say...

Viktor had this to say about the multiple ways in which DevOps is interpreted and practiced: "I work with a lot of companies, and every time I visit a company and they say 'yes, we are doing DevOps' and I ask them 'what is DevOps?' and I always get a different answer."

This highlights that some clarification is long overdue when it comes to DevOps. Hopefully this conversation will go some way to doing just that...


Why you should start learning Spring Boot: An interview with Greg Turnquist

Packt
05 Feb 2018
5 min read
If you're not sure what Spring Boot is exactly, or why it's becoming such an important part of the Java development ecosystem, you're in the right place. We'll explain: Spring Boot is a micro framework built by the team at Pivotal that has been designed to simplify the bootstrapping and development of new Spring applications. To put it simply, it gets you up and running as quickly as possible.

Greg Turnquist has significant experience with the Spring team at Pivotal, which means he is in the perfect position to give an insight into the software. We spoke to Greg recently to shed some light on Spring Boot, as well as his latest book, Learning Spring Boot 2.0 - Second Edition. Greg tweets as @gregturn on Twitter.

Why you should use Spring Boot

Packt: Spring Boot is a popular tool for building production-grade enterprise applications in Spring. What do you think are the 3 notable features of Spring Boot that stand apart from the other tools available out there?

Greg Turnquist: The three characteristics I have found interesting are:

  • Simplicity and ease of building new apps
  • Boot's ability to back off when you define custom components
  • Boot's ability to respond to community feedback as it constantly adds valued features

Packt: You have a solid track record of developing software. What tools do you use on a day-to-day basis?

GT: As a member of the Spring team, I regularly use IntelliJ IDEA, Slack, Gmail, Homebrew, various CI tools like CircleCI/TravisCI/Bamboo, Maven/Gradle, and Sublime Text 3.

How to start using Spring Boot

Packt: For a newbie developer, would you suggest getting started with Spring first, before trying their hand at Spring Boot? Or is Boot so simple to learn that a Java developer could pick it up straight away and build applications?

GT: In this day and age, there is no reason not to start with Spring Boot. The programming model is so simple and elegant that you can have a working web app in five minutes or less. And considering Spring Boot IS Spring, the argument is almost false.

Packt: How does the new edition of Learning Spring Boot prepare readers to be industry-ready? For existing Spring and Spring Boot developers, what are the aspects to look forward to in your book?

GT: This book contains a wide range of practical examples covering areas such as web, data access, developer tools, messaging, WebSockets, and security. By building a realistic example throughout the book on top of Java's de facto standard toolkit, it should be easy to learn valuable lessons needed in today's industry. Additionally, using Project Reactor throughout, the reader will be ready to build truly scalable apps. As the Spring portfolio adopts support from Project Reactor, this is the only book in the entire market focused on that paradigm. Casting all these real-world problems in light of such a powerful, scalable toolkit should be eagerly received. I truly believe this book helps bend the curve so that people can get operational, faster, and are able to meet their needs.

How well does Spring Boot integrate with JavaScript and JavaScript frameworks?

Packt: You also work a bit with JavaScript. Where do you think Spring and Spring Boot support for full-stack development with JS frameworks is heading?

GT: Spring Boot provides first-class support for either dropping in WebJars or self-compiled JavaScript modules, such as with Webpack. The fact that many shops are moving off of Ruby on Rails and onto Spring Boot is evidence that Boot has all that's needed to build strong, powerful apps with full-blown front ends to meet the needs of development shops.

What does the future hold for Spring Boot?

Packt: Where do you see the future of Spring Boot's development going? What changes or improvements can the community expect in future releases?

GT: Spring Boot has a dedicated team backing its efforts that at the same time is very respectful of community feedback. Adopting support for reactive programming is one such example that has been in motion for over two years. I think core things like the "Spring way" aren't going anywhere, since they are all proven approaches. At the same time, support for an increasing number of 3rd-party libraries and more cloud providers will be something to keep an eye on. Part of the excitement is not knowing exactly where things are going, so I look forward to the future of Spring Boot along with everyone else.

Why you should read Learning Spring Boot

Packt: Can you give developers 3 reasons why they should pick up your book?

GT: Are you interested in the hottest Java toolkit that is out there? Do you want to have fun building apps? And do you want to take a crack at the most revolutionary addition made to the Spring portfolio (Project Reactor)? If you answered yes to any of those, then this book is for you.
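Greg's "working web app in five minutes" claim is easy to picture. Here is a minimal sketch of a Spring Boot web app – a hypothetical demo class, assuming the spring-boot-starter-web dependency is on the classpath:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// one class is enough: auto-configuration starts an embedded web server
@SpringBootApplication
@RestController
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    // GET http://localhost:8080/hello
    @GetMapping("/hello")
    public String hello() {
        return "Hello, Spring Boot!";
    }
}
```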


Listen: We discuss why chaos engineering and observability will be important in 2019 [Podcast]

Richard Gall
21 Dec 2018
1 min read
This week I published a post that explored some of the key trends in software infrastructure that security engineers, SREs, and sysadmins should be paying attention to in 2019. There was clearly a lot to discuss – which is why I sat down with my colleague Stacy Matthews to explore some of the topics from the post in a little more depth. Enjoy!

https://soundcloud.com/packt-podcasts/why-observability-and-chaos-engineering-will-be-vital-in-2019

What do you think? Is chaos engineering too immature for widespread adoption? And how easy will it be to begin building for observability?


“Git, like all other version control tools, exists to solve for one problem: change” - Joseph Muli and Alex Magana [Interview]

Packt Editorial Staff
09 Oct 2018
5 min read
An unreliable versioning tool makes product development a herculean task. Creating and enforcing checks and controls for the introduction, scrutiny, approval, merging, and reversal of changes in your source code are some effective methods to ensure a secure development environment. Git and GitHub offer constructs that enable teams to conduct version control and collaborative development in an effective manner. When properly utilized, Git and GitHub promote agility and collaboration across a team, and in doing so, enable teams to focus and deliver on their mandates and goals.

We recently interviewed Joseph Muli and Alex Magana, the authors of the Introduction to Git and GitHub course. They discussed the various benefits of Git and GitHub while sharing some best practices and myths.

Author Bio

Joseph Muli loves programming, writing, teaching, gaming, and travelling. Currently, he works as a software engineer at Andela and Fathom, and specializes in DevOps and Site Reliability. Previously, he worked as a software engineer and technical mentor at Moringa School. You can follow him on LinkedIn and Twitter.

Alex Magana loves programming, music, adventure, writing, reading, architecture, and is a gastronome at heart. Currently, he works as a software engineer with BBC News and Andela. Previously, he worked as a software engineer with SuperFluid Labs and Insync Solutions. You can follow him on LinkedIn or GitHub.

Key Takeaways

  • Securing your source code with version control is effective only when you do it the right way. Understanding the best practices used in version control can make it easier for you to get the most out of Git and GitHub.
  • GitHub has an elaborate UI. It'll immensely help your development process to learn how to navigate the GitHub UI and install the Octotree extension.
  • GitHub is a powerful tool that is equipped with useful features. Exploring the Feature Branch Workflow and forking features, such as submodules and rebasing, will enable you to make optimum use of the many features of GitHub.
  • The more elaborate the tools, the more time they can consume if you don't know your way through them. Master the commands for debugging and maintaining a repository to speed up your software development process.
  • Keep your code updated with the latest changes using CircleCI or TravisCI, continuous integration tools that integrate with GitHub.
  • The struggle isn't over until the code is successfully released to production. With GitHub's release management features, you can learn to complete hiccup-free software releases.

Full Interview

Why is Git important? What problem is it solving?

Git, like all other version control tools, exists to solve for one problem: change. This has been a recurring issue, especially when coordinating work on teams, both local and distributed – and addressing it is specifically an advantage of Git through hubs such as GitHub, BitBucket and GitLab. The tool was created by Linus Torvalds in 2005 to aid development and contribution on the Linux kernel. However, this doesn't necessarily limit Git to code: any product or project that has multiple contributors or requires release management and versioning stands to gain an improved workflow through Git. This also puts into perspective that there is no single standard; it's advisable to use what best suits your product(s).

What other similar solutions or tools are out there? Why is Git better?

As mentioned earlier, other tools do exist to aid in version control. There are a lot of factors to consider when choosing a version control system for your organization, depending on product needs and workflows. Some organizations have in-house versioning tools because they suit their development. Some organizations, for reasons such as privacy and security or support, may look for an integration with third-party and in-house tools. Git primarily exists to provide a faster and distributed version system, one that is not tied to a central repository, hub or project. It is highly scalable and portable. Other VC tools include Apache Subversion, Mercurial and Concurrent Versions System (CVS).

How can Git help developers? Can you list some specific examples (real or imagined) of how it can solve a problem?

A simple way to define Git's indispensability is that it enables fast, persistent and accessible storage. This implies that changes to code throughout a product's life cycle can be viewed and updated on demand, each with simple and compact commands to enable the process. Developers can track changes from multiple contributors, blame introduced bugs and revert where necessary. Git enables multiple workflows that align with practices such as Agile, e.g. feature branch workflows, as well as forking workflows for distributed contribution, i.e. to open source projects. (A minimal sketch of the feature branch flow follows at the end of this piece.)

What are some best tips for using Git and GitHub?

These are some of the best practices you should keep in mind while learning or using Git and GitHub:

  • Document everything
  • Utilize the README.md and wikis
  • Keep simple and concise naming conventions
  • Adopt naming prefixes
  • Correspond a PR and branch to a ticket or task
  • Organize and track tasks using issues
  • Use atomic commits

Editor's note: To explore these tips further, read the authors' post '7 tips for using Git and GitHub the right way'.

What are the myths surrounding Git and GitHub?

Just as every solution or tool has its own positives and negatives, Git is also surrounded by myths one should be aware of. Some of these are:

  • Git is GitHub
  • Backups are equivalent to version control
  • Git is only suitable for teams
  • To effectively use Git, you need to learn every command to work with it

Editor's note: To explore these myths further, read the authors' post '4 myths about Git and GitHub you should know about'.

Read next:

  • GitHub's new integration for Jira Software Cloud aims to provide teams a seamless project management experience
  • GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub
  • GitHub introduces 'Experiments', a platform to share live demos of their research projects
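As a concrete companion to the feature branch workflow the authors describe, here is a minimal sketch scripted with JGit, a Java library for Git. The repository path, file names and branch names are hypothetical, and a real project would more often use the git CLI directly:

```java
import java.io.File;
import org.eclipse.jgit.api.Git;

/** Feature-branch flow in miniature: branch, make an atomic commit, merge back. */
public class FeatureBranchDemo {
    public static void main(String[] args) throws Exception {
        try (Git git = Git.init().setDirectory(new File("demo-repo")).call()) {
            // an initial commit on the default branch
            new File("demo-repo/README.md").createNewFile();
            git.add().addFilepattern("README.md").call();
            git.commit().setMessage("docs: add README").call();

            // branch off for the feature and commit one logical change (an atomic commit)
            git.checkout().setCreateBranch(true).setName("feature/login").call();
            new File("demo-repo/login.html").createNewFile();
            git.add().addFilepattern("login.html").call();
            git.commit().setMessage("feat: add login page skeleton").call();

            // merge the finished feature back into the default branch
            git.checkout().setName("master").call();
            git.merge().include(git.getRepository().resolve("feature/login")).call();
        }
    }
}
```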

What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]

Neil Aitken
18 Aug 2018
11 min read
Netflix, the global poster child for streamed TV and the use of big data to inform the programs it develops, has shown steady customer growth for several years now. Recently, the company revealed that it would be shutting down the user reviews which have been so prominent in its media catalogue interface for so long. In the background, media and telco are merging. AT&T, the telco which recently undertook one of the biggest deals in history, acquired Time Warner and wants HBO to become like Netflix. Telia, a Swedish telecommunications company, bought Bonnier Broadcasting in late July 2018.

The video content landscape has changed a great deal in the last decade. Everyone in the entertainment game wants to move beyond broadcast TV and to use data to develop content their users will love and which will give their customer base more variety. This means they can look to data to charge higher subscription rates per user, experiment with tiered subscriptions, decide to localize global content, globalize local content and more.

These changes raise two key questions. First, are we heading for a world in which AI and ML based algorithms drive what we watch on TV? And second, are the days of human recommendation quietly being replaced by machine recommendations over which the user has no control?

[Chart: Netflix is acquiring customers fast. Source: Statista]

To get an insider's view on the answers to those questions, I sat down with Matt Jones of OVO Mobile, one of Australia's fastest growing telecommunications companies. OVO offer their customers a unique point of difference – streaming video sports content included in a phone plan. OVO has bought the rights to a number of niche sports in Australia which weren't previously available, and now offers free OTA (Over The Air) digital content for fans of 'unusual' sports like drag racing or gymnastics. OTA content is anything delivered to a user's phone over a wireless network. In OVO's case, the data used to transport the video content they provide to their users is free. That means customers don't have to worry about paying more for mobile data so they can watch it – a key concern for users.

OVO Mobile and Netflix are in very similar businesses – and Matt has a unique point of view about how artificial intelligence and machine learning will impact the world of telco and media.

Key takeaways

  • What's changed our media consumption habits: the ubiquitous mobile internet, an always-on and connected younger generation, better mobile hardware, improved network performance and capabilities, and a need for control over content choices.
  • Digitization allows new features, some of which people have proven to love: binge watching, screening out advert breaks and time shifting.
  • The key to understanding the value of ML and AI is not in the statistical or technical models that enable it, but in the way AI is used to improve the customer experience your digital customers are having with you.
  • The use of AI in digital/app experiences personalizes what users see in ways old media could not offer.
  • Content producers use the information they have on us – about the programs we watch, when we watch them and for how long we watch – to personalize content, re-engage us and target advertising more effectively.
  • The contribution of AI/ML to the delivery of online media is endless in terms of personalisation, context awareness, notification management and more.
Social acceptance of media delivered to users on mobile phones is what's driving change

A number of overlapping factors are driving changes in how we engage with content. Social acceptance of the internet, and mobile access to it, as a core part of life is one key enabler. From a technology perspective, things have changed too. Smartphones now have bigger, higher resolution screens than ever before – and they're with us all the time. Jones believes this change is part of a cultural evolution in how we relate to technology. He says, "There has also been a generational shift which has taken place. Younger people are used to the small screen being the primary device. They're all about control, seeking out their interests and consuming these, as opposed to previous generations which were used to mass content distribution from traditional channels like TV."

Other factors include network performance and capability, which have improved dramatically in recent years. Data speeds have grown exponentially, from 3G networks – launched less than 15 years ago, which could only support stuttered, low resolution video – to 4G and 4.5G enabled networks, which can now support live streaming of high definition TV. Mobile data allowances in plans, and offers from some phone companies to provide some content 'data free' (as OVO does with theirs), have also driven uptake. Finally, people want convenience, and digital offers it in a way people have never experienced before. Digitization allows new features, some of which people have proven to love: binge watching, screening out advert breaks and time shifting.

What part can AI / machine learning play in the delivery of media online?

Artificial Intelligence (AI) is already part of 85% of our online interactions. Gartner suggests it will be part of every product in the future. The key to understanding the value of ML and AI is not in understanding the statistical or technical models that are used to enable it, it's the way AI is used to improve the customer experience your digital customers are having with you. When you find a new band in Spotify, when YouTube recommends a funny video you'll like, when Amazon shows you other products that you might like to consider alongside the one you just put in your basket, that's AI working to improve your experience.

"Over The Top content is exploding. Content owners are going direct to consumer and providing fantastic experiences for their users. What's changing is the use of AI in digital / app experiences to personalize what users see in ways old media never could," says Matt.

Matt's video content recommendation app, for example, 'learns' not just what you like to watch but also the times you are most likely to watch it. It then prompts users with a short video to entice them to watch. And the analytics available show just how effective it is: Matt's app can be up to 5 times more successful at encouraging customers to watch his content than approaches which don't use it.

"The list of ways that AI / ML contributes to the delivery of media online is endless. Personalisation, context awareness, notification management… endless."

By offering users recommendations on content they'll love, producers can now engage more customers for longer.
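To make the time-aware personalization Jones describes concrete, here is a toy sketch that scores a catalogue by genre affinity and habitual viewing hours. It is purely illustrative – invented names and numbers, not OVO's system:

```java
import java.util.*;

/** Illustrative only: rank videos by a user's genre affinity and hour-of-day habits. */
public class TimeAwareRecommender {
    record Video(String title, String genre) {}

    // watch counts learned from a user's history (hypothetical values)
    private final Map<String, Integer> genreCounts = Map.of("drag-racing", 12, "gymnastics", 3);
    private final Map<Integer, Integer> hourCounts = Map.of(21, 10, 12, 2);

    double score(Video video, int hourOfDay) {
        double genreAffinity = genreCounts.getOrDefault(video.genre(), 0);
        double hourAffinity = hourCounts.getOrDefault(hourOfDay, 0);
        return genreAffinity * (1 + hourAffinity); // boost habitual viewing times
    }

    List<Video> recommend(List<Video> catalogue, int hourOfDay) {
        return catalogue.stream()
                .sorted(Comparator.comparingDouble((Video v) -> -score(v, hourOfDay)))
                .toList();
    }

    public static void main(String[] args) {
        var recommender = new TimeAwareRecommender();
        var catalogue = List.of(new Video("Top Fuel Finals", "drag-racing"),
                                new Video("World Champs", "gymnastics"));
        // at 9pm, the user's favourite genre at their favourite hour ranks first
        System.out.println(recommender.recommend(catalogue, 21));
    }
}
```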
Content producers use the information they have on us - the programs we watch, when we watch them and for how long - to:

- Personalise at volume: Apps used to deliver content can personalise what's shown first to users, based on a number of variables known about them, including the sort of context awareness that is relatively easy to obtain on mobile devices. Ultimately, every AI customer experience improvement (including the examples that follow) is designed to automate the process of providing something special that each individual uniquely wants. Automation means that can be done at scale, with every customer treated uniquely.
- Notification management: AI that tracks the success of notifications and acknowledges, critically, when they are not helpful to the user can be employed to alert users only about things they want to know. These AI solutions provide updates based on user preferences and avoid serving irrelevant information.
- Content discovery and re-engagement: AI and ML can be used to provide recommendations on what users could watch, exposing customers to content they would not otherwise find but are likely to value.
- Better, more relevant advertising: Advertising which targets a legitimately held, real customer need is actually useful to viewers. Better analytics for AI can assist in targeting micro segments with ads which contain information customers will value. Lattice is a business insights tool provider; its 'Lattice Engine' product combines information held in multiple cloud-based locations and uses AI to automatically assign customers to a segment which suits them. Those data are then provided to a customer's eCommerce site and other channel interactions, and used to offer content which will help them convert better.
- Developing better segments: Raw data on real customers can be gathered from digital content systems to inform Above The Line marketing in the real, non-digital world. Big data analytics can now be used with accurate segmentation for local area marketing and to tie together digital and retail customer experiences. McKinsey suggests that 36% of companies are actively pursuing strategies driven from their Big Data reserves, and advises its clients that Big Data can be used to better understand and grow Customer Lifetime Values. (A minimal segmentation sketch follows at the end of this section.)
- In the future, deep linking for calls-to-action: Some digital content is provided in a form that lets viewers find out more about an item on screen. Providing a way to deep link from a video screen into a shopping cart prepopulated with something just seen on screen is an exciting possibility for the future. Cutting steps out of the buying process, so that eCommerce users can transact from within content apps on a product they've just seen on the screen, is likely to become a big business. Deep linking raises the value of the content shown to the degree it raises the sales of the products included.

Bringing it all together

Jones believes that those who invest big in AI and machine learning, and among them those who find a way to draw out insights and act upon them, will be the ultimate victors. "The big winners are going to be the people who connect a fan with content they love and use AI and ML to deliver the best possible experience. It's about using all the information you have about your users and acting on it," said Jones. That commercial incentive is already driving behavior.
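To make the 'developing better segments' idea above concrete, here is a minimal sketch, assuming scikit-learn is available, that clusters viewers into behavioural segments from simple usage features. The feature choices and segment count are our own assumptions for illustration, not from the article:

```python
# A minimal sketch of behavioural segmentation with k-means.
# Features and cluster count are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per viewer: [hours watched per week, share of viewing on mobile,
# average session length in minutes]
viewers = np.array([
    [2.0, 0.9, 15.0],   # light, mobile-first snacker
    [1.5, 0.8, 12.0],
    [10.0, 0.2, 55.0],  # heavy, big-screen binger
    [12.0, 0.1, 60.0],
    [5.0, 0.5, 30.0],   # somewhere in between
    [6.0, 0.6, 28.0],
])

# Standardize so no single feature dominates the distance metric.
X = StandardScaler().fit_transform(viewers)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(segments)  # e.g. [0 0 1 1 2 2] -- three marketable segments
```

Each resulting cluster is a candidate segment that marketing can name, size and target, which is exactly the bridge between raw viewing data and Above The Line campaigns described above.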
AI and ML already drive personalized content recommendations. Progressive content companies, including Matt's, are already working on building AI into every facet of every digital experience you have. As to whether AI is entirely replacing social influence, I don't think that's the case. The research says people are still four times more likely to watch a video if it is recommended to them by a friend. Reviews have always been important to presales on the internet, and that applies to TV shows too. People want to know what real users felt when they used a product. If they can't get reviews from Netflix, they will simply open a new tab and google for reviews while they are deciding what to watch.

About Matt Jones

Matt is an industry disruptor. Launching the first-of-its-kind media and telco brand OVO Mobile in 2015, he has been the driving force behind the convergence of new media and telco, bringing together telecommunications with media rights and digital broadcast for mass distribution. OVO is a new type of telco, delivering content that fans are passionate about, streamed live on their mobile or tablet, unlimited and data free. OVO has secured exclusive 3-year+ digital broadcast and distribution rights for a range of content owners including Supercars, World Superbikes, 400 Thunder Drag Series, Audi Australia Racing and Gymnastics Australia, with a combined Australian audience estimated at over 7 million. OVO is a multi-award winner, including the Money Magazine Best of the Best Award 2017 for high usage, and has featured on A Current Affair, Sunrise, The Today Show, Channel 7 News, Channel 9 News and multiple radio shows for its world-first kids' mobile phone plan with built-in cyber security protection. As OVO CEO, Matt was nominated for Start-Up Executive of the Year at the CEO Magazine Awards 2017 and was awarded runner-up. The award recognises the achievements of leaders and professionals, and the contributions they have made to their companies across industry-specific categories. Matt holds a Bachelor of Arts (BA) from the University of Tasmania and regularly speaks at telco, sports marketing and media forums and events. Matt has held executive leadership roles at leading telecommunications brands including Telstra (Head of Strategy, Operations), Optus, Vodafone, AAPT and Telecom New Zealand, as well as global management consulting firms including BearingPoint. Matt lives on the northern beaches of Sydney with his wife Mel and daughters Charlotte and Lucy.
Prof. Rowel Atienza discusses the intuition behind deep learning, advances in GANs & techniques to create cutting-edge AI models

Packt Editorial Staff
30 Sep 2019
6 min read
In recent years, deep learning has made unprecedented progress in vision, speech, natural language processing and understanding, and other areas of data science. Developments in deep learning techniques, including GANs, variational autoencoders and deep reinforcement learning, are producing impressive AI results. For example, DeepMind's AlphaGo became a game changer in AI research when it beat world champions in the game of Go. In this interview, Professor Rowel Atienza, author of the book Advanced Deep Learning with Keras, talks about the recent developments in the field of deep learning. The book is a comprehensive guide to the advanced deep learning techniques available today, so you can create your own cutting-edge AI, and it strikes a balance between advanced concepts in deep learning and practical implementations with Keras.

Key takeaways from the interview

- The intuition of deep learning is built on the fact that the deeper the network gets, the more feature representations the network learns in order to solve complex real-world problems.
- The objective of meta learning is to enable agents to be more robust to unforeseen events and to lessen the dependency on huge data.
- Advances in GANs enable us to generate high-dimensional fake data, such as high-resolution images or videos, that look very convincing.
- Deep learning tackles the curse of dimensionality by finding efficient data structures and layers that can represent complex data in the most efficient manner.

The interview in detail

What is the intuition behind deep learning? What are the recent developments in deep learning?

Rowel Atienza: Deep learning is built on the intuition that the deeper the network gets, the more feature representations the network learns in order to solve complex real-world problems. Unlike classical machine learning, deep learning learns these features automatically from data at different degrees of supervision. There are many recent developments in deep learning. There are advances in graph neural networks because people are realizing the limits of NLP (Natural Language Processing), CNNs (Convolutional Neural Networks), and RNNs (Recurrent Neural Networks) in representing more complex data structures such as social networks, 3D shapes, molecular structures, etc. Implementing causality in reasoning on data is another area of strong interest; deep learning is strong on correlation, not on discovering causality in data. Meta learning, or learning to learn, is another area of interest. The objective is to enable agents to be more robust to unforeseen events and to lessen the dependency on huge data.

What are the different deep learning techniques to create successful AI?

RA: A successful AI is dependent on two things: 1) deep domain knowledge and 2) deep understanding of the state-of-the-art techniques that will work on the domain problem. Domain knowledge comes from someone who is very familiar with the domain problem. This person is not necessarily knowledgeable in AI. This domain knowledge is then modelled in AI to automate the process of problem solving.

How does deep learning tackle the curse of dimensionality?

RA: One of the goals of deep learning is to keep on finding efficient data structures and layers that can represent complex data in the most efficient manner. For example, geometric deep learning is able to circumvent the limitations of representing and learning from 3D data by avoiding inefficient 3D convolutions. There is still so much to be done in this space.

What are autoencoders?
What is the need for autoencoders in deep learning? How do you create an autoencoder?

RA: Autoencoders compress high-dimensional data into a low-dimensional code without losing important information. The low-dimensional code is suitable for further processing by other deep learning models, such as generative models like GANs and VAEs. An autoencoder can easily be implemented using two networks, an encoder and a decoder; the depth, width, and type of layers depend on the original data to be encoded. (A minimal Keras sketch follows at the end of this interview.)

Why are GANs so innovative?

RA: GANs are innovative since they are good at generating fake data that looks real. That is something hard to accomplish using other generative models. The advances in GANs enable us to generate high-dimensional fake data, such as high-resolution images or videos, that look very convincing.

Tell us a little bit about this book. What makes this book necessary? What gap does it fill?

RA: Advanced Deep Learning with Keras focuses on recent advances in deep learning. It starts with a quick review of deep learning concepts (NLP, CNN, RNN). Discussions of deep neural networks, autoencoders, generative adversarial networks (GAN), variational autoencoders (VAE), and deep reinforcement learning (DRL) follow. The book is important for everyone who would like to understand advanced concepts in deep learning and their corresponding implementation in Keras. The current version has an in-depth focus on generative models (autoencoders, GANs, VAEs) that can be used in practical settings. The DRL chapters explain the core concepts of value-based and policy-based methods in reinforcement learning and their working implementations in Keras, which are difficult to get right.

About the Book

Advanced Deep Learning with Keras is a comprehensive guide to the advanced deep learning techniques available today, so you can create your own cutting-edge AI. Using Keras as an open-source deep learning library, you'll find hands-on projects throughout that show you how to create more effective AI with the latest techniques.

About the Author

Rowel Atienza is an Associate Professor at the Electrical and Electronics Engineering Institute of the University of the Philippines, Diliman. He holds the Dado and Maria Banatao Institute Professorial Chair in Artificial Intelligence. Rowel has been fascinated with intelligent robots since he graduated from the University of the Philippines. He received his MEng from the National University of Singapore for his work on an AI-enhanced four-legged robot. He finished his PhD at The Australian National University for his contribution to the field of active gaze tracking for human-robot interaction.
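To make the encoder/decoder idea concrete, here is a minimal illustrative autoencoder in Keras, compressing MNIST digits to a 32-dimensional code and reconstructing them. It is a generic sketch in the spirit of the book's topic, not code taken from the book:

```python
# A minimal, illustrative autoencoder in Keras: an encoder compresses
# MNIST digits to a low-dimensional code, a decoder reconstructs them.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist

(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

latent_dim = 32  # size of the compressed code

# Encoder: 784-dim input -> 32-dim code
inputs = Input(shape=(784,))
code = Dense(latent_dim, activation="relu")(inputs)
encoder = Model(inputs, code, name="encoder")

# Decoder: 32-dim code -> 784-dim reconstruction
code_inputs = Input(shape=(latent_dim,))
reconstruction = Dense(784, activation="sigmoid")(code_inputs)
decoder = Model(code_inputs, reconstruction, name="decoder")

# Autoencoder: chain the two and train on reconstruction loss,
# using the input itself as the target.
autoencoder = Model(inputs, decoder(encoder(inputs)), name="autoencoder")
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))
```

After training, the encoder alone yields the low-dimensional code Atienza describes, ready for downstream models such as GANs and VAEs.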
"Developers need to say no" - Elliot Alderson on the FaceApp controversy in a BONUS podcast episode [Podcast]

Richard Gall
12 Aug 2019
5 min read
Last month there was a huge furore around FaceApp, the mobile application that ages your photographs to show you what you might look like as you get older, caused by a rapid cycle of misinformation and conjecture. It was thanks to cybersecurity researcher Elliot Alderson - who you might remember from last week's podcast episode - that the world was able to get beyond speculation and find out what was really going on. We got in touch with Elliot shortly after the story broke. He was kind enough to speak to us about the FaceApp furore, and explained what caused the confusion and how he managed to get to the bottom of what was actually going on. You can listen to what he had to say in this special short bonus episode: https://soundcloud.com/packt-podcasts/bonus-security-researcher-elliot-alderson-on-the-faceapp-furore

Elliot says that although FaceApp is problematic, it isn't unique. It poses exactly the same threat to our privacy as the platforms and applications that millions of people use every day. "There is an issue with FaceApp," he tells us. "But there is an issue with Facebook, with Snapchat, with Twitter - it's never a good idea for someone to upload a photo of your face to a random application." This line of argument can be found elsewhere, and it is arguably the most important lesson we can learn. In this article from Wired, journalist Brian Barrett writes: "should you be worried about FaceApp? Sure. But not necessarily more than any other app you let into your photo library."

Should you use FaceApp?

Although you might assume that a security professional would simply warn everyone against using these sorts of applications, Elliot says: "this application is really trendy. You can see a lot of stars using it on social media, so this is normal - you want to use this application."

What you need to consider if you want to use FaceApp

If you do want to use it, you should be careful. "You have to step back a little bit before using it and ask yourself a question" about how money is being made. "This is a free application... there are developers behind this application, they need to live, they need to eat - they need to earn money - and in general the answer is with your data."

"You are the information," Elliot says. "You can decide to use it, and say okay, I'm ready to lose this part of my privacy in order to use this cool service... or you will... think no, it's not worth it. FaceApp seems to be cool, but my privacy is more important than something trendy like this."

The key, then, is to check the terms and conditions of the application. "You have to know that you will have lost a part of your privacy. And if you're okay with that then - okay, go for it, and use the application."

"Developers need to say no sometimes."

Developer responsibility and code ethics

There are clearly question marks for users about FaceApp or, indeed, any other free application that has access to your data. But what about the developers building these applications? Do they have a part to play in ensuring that applications respect user consent and privacy? "It's complicated for a developer to say no to their project manager," says Elliot. However, this doesn't mean developers should be content to follow orders from management. "Developers need to raise their level... and say okay, but ethics is also important..."
Elliot continues: "as a technical guy I need to spread the message internally in my company, and say to the project manager, to the business, to the marketing department: okay, this is a cool feature, but no, we won't do that because this is against our users."

"Developers need to say no sometimes - and companies need to understand that it's not okay to dump as much data as possible from their users."

How did Elliot Alderson uncover the truth about FaceApp?

One thing that is often forgotten in these stories is the technical process through which the truth is uncovered. Sure, that might be a little dry or complicated for some, but the fact that there is real detective work in understanding what's actually going on inside an application is incredibly interesting. It also highlights that while software might sometimes appear mysterious or even impenetrable, with the right skills and tools we can see how things actually work. That's not only useful from a technical perspective, it's also a way for all of us to retrieve a small sense of power from applications built and owned by companies worth billions of dollars.

"It's not that easy, but it's not super complicated too," says Elliot. Although he tells us that "the first time you want to do it you need to spend some time on it for sure," once you're set up and ready to go you can find things out remarkably fast. Using a tool called Burp Suite, the whole process was complete in a matter of moments. "Checking FaceApp took literally 5 minutes for me, because everything is already set up on my computer and I just have to install the application and look at the network request." (A minimal sketch of this kind of traffic inspection follows below.)

Learn more about Burp Suite with Packt's selection of eBooks and videos here. Follow Elliot Alderson on Twitter: @fs0c131y
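Burp Suite is a GUI tool, but the same kind of inspection can be scripted. Here is a minimal sketch using the open-source mitmproxy as an alternative to Burp Suite (this is our own illustration, not Elliot's actual setup): run it with `mitmdump -s traffic_logger.py` while your phone's traffic is proxied through the machine, and it logs every request an app makes.

```python
# traffic_logger.py -- a minimal, illustrative mitmproxy addon that logs
# each request an app makes while the device is proxied through mitmproxy.
# Run with: mitmdump -s traffic_logger.py
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Print the method, host and path of each outgoing request,
    # so you can see exactly which servers an app talks to.
    req = flow.request
    print(f"{req.method} {req.host}{req.path}")
    # Flag anything that looks like an image upload, as in the FaceApp case.
    if "image" in (req.headers.get("content-type") or ""):
        print(f"  -> possible photo upload, {len(req.content or b'')} bytes")
```

The detective work Elliot describes is essentially this: watch where the requests go and what they carry, then compare that against what the app's maker claims.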
Why ASP.NET makes building apps for mobile and web easy - Interview with Jason de Oliveira

Packt
20 Feb 2018
6 min read
Jason De Oliveira works as a CTO for MEGA International, a software company in Paris (France) providing modeling tools for business transformation, enterprise architecture, and enterprise governance, risk, and compliance management. He is an experienced manager and senior solutions architect, with strong skills in software architecture and enterprise architecture. He has been awarded MVP C#/.NET by Microsoft for over 6 years for his numerous contributions to the Microsoft community. In this interview, Jason talks about the newly introduced features of .NET Core 2.0 and how they empower effective cross-platform application development. He also gives us a sneak peek at his recently released book, Learning ASP.NET Core 2.0.

Packt: Let's start with a very basic question. What is .NET Core? How is it different from the .NET Framework?

Jason De Oliveira: In the last 20 years, Microsoft has focused mainly on Windows and built many technologies around it. You can see that easily when looking at ASP.NET and the .NET Framework in general. They provide very good integration and extend the operating system for building great desktop and web applications, but only on the Windows platform. In the last 5 years, we have seen some major changes in the overall strategy and vision of Microsoft due to major changes in the IT market. The cloud-first as well as the mobile-first strategy, coupled with support for other operating systems, has led Microsoft to completely rethink most of the existing frameworks, including the .NET Framework. That is one of the major reasons why the .NET Core framework came into existence. It incorporates a new way of building applications, fully embracing multi-platform development and the latest standards and technologies - and what a great framework it has turned out to be!

ASP.NET Core 2.0 was announced in August 2017. What are the new features and updates introduced in this release to make the development of web apps easier?

The latest version of ASP.NET Core is 2.0, which provides much better performance than the versions before it. It has been open-sourced, so developers can understand how it works internally and adapt it easily to specific needs if necessary. Furthermore, the integration between .NET Core and Visual Studio has been improved. Some other important features of this new release are:

- The Meta-Package, which includes everything necessary to develop great web applications, has been added to the framework.
- Improved NuGet compatibility has been added, which leads to much better developer productivity and efficiency.

Web development can be fun, and using .NET Core in conjunction with the latest version of Visual Studio proves it!

What benefits does Entity Framework Core 2 offer while building a robust MVC web app?

When you are building web applications, you need to store your data somewhere. Today, most of the time the data is stored in relational databases (Microsoft SQL Server, Oracle and so on). While there are multiple ways of connecting to a database using ASP.NET Core, we advise using Entity Framework Core 2, because of its simplicity and because it contains all the necessary features already built in. Another big advantage is that it abstracts the database structure from the object-oriented structure within your code (commonly called an ORM). It furthermore supports nearly all databases you can find on the market, either directly or via additional providers.

When it comes to .NET Core, do you think there is any scope for improvement?
What can Microsoft do in the future to improve the suite?

To be honest, ASP.NET Core 2.0 has achieved a very high level of feature coverage. Nearly everything you can think of is included by default, which is quite remarkable. Nevertheless, Microsoft has already shipped an updated version called ASP.NET Core 2.1 and we can expect it to further support and evolve the framework. One area of improvement could be Artificial Intelligence (AI). As you might know, Microsoft is currently investing very strongly in this area and we think we might see some features included with the next versions of ASP.NET Core. Also, in terms of testability of code and especially live unit testing, we hope to see some improvements in the future. Unit tests are important for building high-quality applications with fewer bugs, so a thorough integration of ASP.NET Core with the Visual Studio testing tools would be a big advantage for any developer.

Tell us something about your book. What makes it unique?

With this book, you will learn and have fun at the same time. The book is very easy to read, while containing basic and advanced concepts of ASP.NET Core 2.0 web development. It is not only about development, though. You will also see how to apply agile methodologies and use tools such as Visual Studio and Visual Studio Code. The examples in the book are real-world examples which build towards a real application at the end - not stripped-down samples, but examples you can quickly adapt to your own application needs. From the start to the end of the book we have applied a Minimum Viable Product (MVP) approach, meaning that at the end of each chapter you will have evolved the overall sample application a little bit more. This is motivating and interesting and will keep you hooked. You will see how to work in different environments, both on-premises and in the cloud. We explain how to deploy, manage and supervise your ASP.NET Core 2.0 web application in Microsoft Azure, Amazon and Docker, for example.

For someone new to web development, what learning path in terms of technologies would you recommend? What choice of tools should he/she make to build the best possible web applications? Where does .NET Core 2.0 fit into the picture?

There are different types of web developers, who could take potentially different paths. Some will start with the graphical user interface and be more interested in the graphical representation of a web application. They should start with HTML5 and CSS3 and maybe use JavaScript to handle the communication with the server. Then there are API developers, who are not very interested in graphical representation but more in building efficient APIs and web services, which reduce bandwidth use and provide good performance. In our book, we show readers how to build views and controllers, while using the latest frontend technologies to provide a modern look and feel. We then explain how to use the built-in features of ASP.NET Core 2.0 for building Web APIs and how to connect them via Entity Framework 2 with the database, or provide a whole host of other services (such as sending emails).
Unlocking the secrets of Microsoft Power BI

Amey Varangaonkar
10 Oct 2017
12 min read
Self-service Business Intelligence is the buzzword everyone's talking about today. It gives modern business users the ability to find unique insights from their data without any hassle. Amidst a myriad of BI tools and platforms out there in the market, Microsoft's Power BI has emerged as a powerful, all-encompassing BI solution, empowering users to tailor and manage Business Intelligence to suit their unique needs and scenarios.

Brett Powell is a Microsoft Power BI partner, and the founder and owner of Frontline Analytics LLC, a BI and analytics research and consulting firm. Brett has contributed to the design and development of Microsoft BI stack and Power BI solutions of diverse scale and complexity across the retail, manufacturing, financial, and services industries. He regularly blogs about the latest happenings in Microsoft BI and Power BI features at Insight Quest. He is also an organizer of the Boston BI User Group.

In this two-part interview, Brett talks about his new book, Microsoft Power BI Cookbook, and shares his insights and expertise in the area of BI and data analytics with a particular focus on Power BI. In part one of the interview, Brett shared his views on topics ranging from what it takes to be successful in the field of BI and data analytics to why he thinks Microsoft is going to lead the way in shaping the future of the BI landscape. Today in part two, he shares his expertise with us on the unique features that differentiate Power BI from other tools and platforms in the BI space.

Key takeaways

- Ease of deployment across multiple platforms, efficient data-driven insights, ease of use and support for a data-driven corporate culture define an ideal Business Intelligence solution for enterprises.
- Power BI leads in self-service BI because it's the first Software as a Service (SaaS) platform to offer 'End User BI', in which anyone, not just a business analyst, can leverage powerful tools to obtain greater value from data.
- Microsoft Power BI has been identified as a leader in Gartner's Magic Quadrant for BI and Analytics platforms, and provides the visually rich and easy-to-access interface that modern business users require.
- You can isolate report authoring from dataset development in Power BI, and quickly scale a Power BI dataset up or down as your needs change.
- Power BI is much more than a tool for reports and dashboards. With a thorough understanding of its query and analytical engines, users can build more powerful and sustainable BI solutions.

Part Two: Interview Excerpts - Power BI from a Worm's Eye View

How long have you been a Microsoft Power BI user? How have you been using Power BI on a day-to-day basis? What other tools do you generally end up using alongside Power BI for your work?

I've been using Power BI from the beginning, when it was merely an add-in for Excel 2010. Back then, there was no cloud service and Microsoft BI was significantly tethered to SharePoint, but the fundamentals of the Tabular data modelling engine and the DAX programming language were available in Excel for building personal and team solutions. On a day-to-day basis I regularly work with Power BI datasets - that is, the analytical data models inside of Power BI Desktop files. I also work with Power BI report authoring and visualization features and with various data sources for Power BI, such as SQL Server.

From Learning to Mastering Power BI

For someone just starting out using Power BI, what would your recommended learning plan be?
For existing users, what does the road to mastering Microsoft Power BI look like?

When you're just starting out, I'd recommend learning the essentials of the Power BI architecture and how the components (Power BI service, Power BI Desktop, On-Premises Data Gateway, Power BI Mobile, etc.) work together. A sound knowledge of the differences between datasets, reports, and dashboards is essential, and an understanding of app workspaces and apps is strongly recommended, as this is the future of Power BI content management and distribution. In terms of a learning path you should consider what your role will be on Power BI projects - will you be administering Power BI, creating reports and dashboards, or building and managing datasets? Each of these roles has its own skills, technologies and processes to learn. For example, if you're going to be designing datasets, a solid understanding of the DAX language and filter context is essential, and knowledge of M queries and data access is very important as well.

The road to mastering Power BI, in my view, involves a deep understanding of both the M and DAX languages in addition to knowledge of Power BI's content management, delivery, and administration processes and features. You need to be able to contribute to the full lifecycle of Power BI projects and help guide the adoption of Power BI across an organization. The most difficult or 'tricky' aspect of Power BI is thinking of M and DAX functions and patterns in the context of DirectQuery and Import mode datasets. For example, certain code or design patterns which are perfectly appropriate for import models are not suitable for DirectQuery models. A deep understanding of the tradeoffs and use cases for DirectQuery versus the default Import (in-memory) mode, and the ability to design datasets accordingly, is a top characteristic of a Power BI master.

5+ interesting things (you probably didn't know) about Power BI

What are some things that users may not have known about Power BI or what it could do? Can readers look forward to learning to do some of them from your upcoming book, Microsoft Power BI Cookbook?

The great majority of learning tutorials and documentation on Power BI involves the graphical interfaces that help you get started with Power BI. Likewise, when most people think of Power BI they almost exclusively think of data visualizations in reports and dashboards - they don't think of the data layer. While these features are great and professional Power BI developers can take advantage of them, the more powerful and sustainable Power BI solutions require some level of customization and can only be delivered via knowledge of the query and analytical engines of Power BI. Readers of the Power BI Cookbook can look forward to a broad mix of relatively simple-to-implement tips on usability, such as providing an intuitive Fields list for users, through to more complex yet powerful examples of data transformations, embedded analytics, and dynamic filter behaviours such as Row-level security models. Each chapter contains granular details on core Power BI features but also highlights synergies available by integrating features within a solution, such as taking advantage of an M query expression, a SQL statement, or a DAX metric in the context of a report or dashboard.

What are the 3 most striking features that make you love to work with Power BI? What are 3 aspects you would like improved?

The most striking feature for me is the ability to isolate report authoring from dataset development.
With Power BI you can easily implement a change to a dataset, such as a new metric, and many report authors can then leverage that change in their visualizations and dashboards, as their reports are connected to the published version of the dataset in the Power BI service. A second striking feature is the 'Query Folding' of the M query engine: I can write or enhance an M query such that a SQL statement is generated to take advantage of the data source system's query processing resources. A third striking feature is the ability to quickly scale a Power BI dataset up or down via the dedicated hardware available with Power BI Premium. With Power BI Premium, free users (users without a Pro license) are now able to access Power BI reports and dashboards.

The three aspects I'd like to see improved are the following:

- Currently we don't have IntelliSense and other common development features when writing M queries.
- Currently we don't have display folders for Power BI datasets, so we have to work around this with larger, more complex datasets to maintain a simple user interface.
- Currently we don't have Perspectives, a feature of SSAS, which would allow us to define a view of a Power BI dataset such that users don't see the parts of a data model not relevant to their needs.

Is the latest Microsoft Power BI update a significant improvement over the previous version? Any specific new features you'd like to highlight?

Absolutely. The September update included a Drillthrough feature that, if configured correctly, enables users to quickly access the crucial details associated with values on their reports, such as an individual vendor or a product. Additionally, there was a significant update to Report Themes which provides organizations with more control to define standard, consistent report formatting. Drillthrough is so important that an example of this feature was added to the Power BI Cookbook. Additionally, Power BI usage reporting, including the identity of the individual user accessing Power BI content, was recently released and this too was included in the Power BI Cookbook. Finally, I believe the new Ribbon Chart will be used extensively as a superior alternative to stacked column charts.

Can you tell us a little about the new 'Timeline Storyteller' custom visual in Power BI?

The Timeline Storyteller custom visual was developed by the Storytelling with Data group within Microsoft Research. Though it's available for inclusion in Power BI reports via the Office Store like other custom visuals, it's more like a storytelling design environment than a single visual, given its extensive configuration options for timeline representations, scales, layouts, filtering and annotations. Much as geospatial visuals have inherent advantages, the linking of Visio diagrams with related Power BI datasets can intuitively call out bottlenecks and otherwise difficult-to-detect relationships within processes.

7 reasons to choose Power BI for building enterprise BI solutions

Where does Power BI fall within Microsoft's mission to empower every person and every organization on the planet to achieve more, in terms of 1. bringing people together, 2. living smarter, 3. friction-free creativity and 4. fluid mobility?

Power BI Desktop is available for free and is enhanced each month with features that empower the user to do more and which remove technical obstacles.
Similarly, with no knowledge whatsoever of the underlying technology or solution, a business user can access a Power BI app on their phone or PC and easily view and interact with data relevant to their role. Importantly for business analysts and information workers, Power BI acknowledges the scarcity of BI and analytics resources (i.e. data scientists, BI developers) and thus provides both graphical interfaces and full programming capabilities right inside Power BI Desktop. This makes it feasible, and often painless, to quickly create a working, valuable solution with relatively little experience with the product. We can expect Power BI to support 10GB (and then larger) datasets soon, as well as improve its 'data storytelling' capabilities with a feature called Bookmarks. In effect, Bookmarks will allow Power BI reports to become like PowerPoint presentations with animation. Organizations will also have greater control over how they utilize the v-Cores they purchase as part of Power BI Premium. This will make scaling Power BI deployments easier and more flexible. I'm personally most interested in the incremental refresh feature identified on the Power BI Premium roadmap. Currently an entire Power BI dataset (in import mode) is refreshed each time, and this is a primary barrier to deploying larger Power BI datasets. Additionally (though not exclusively by any means), the ability to 'write' from Power BI to source applications is also a highly anticipated feature on the Power BI roadmap.

How does your book, Microsoft Power BI Cookbook, prepare its readers to be industry ready? What are the key takeaways for readers from this book?

Power BI is built with proven, industry-leading BI technologies and architectures, such as in-memory, columnar-compressed data stores and functional query and analytical programming languages. Readers of the Power BI Cookbook will likely be able to quickly deliver fresh solutions or propose ideas for enhancements to existing Power BI projects. Additionally, particularly for BI developers, the skills and techniques demonstrated in the Power BI Cookbook are generally applicable across the Microsoft BI stack, such as in SQL Server Analysis Services Tabular projects and the Power BI Report Server.

A primary takeaway from this book is that Power BI is much more than a report authoring or visualization tool. The data transformation and modelling capabilities of Power BI, particularly combined with Power BI Premium capacity and licensing considerations, are robust and scalable. Readers will quickly learn that though certain Power BI features are available in Excel, and though Excel can be an important part of Power BI solutions from a BI consumption standpoint, there are massive advantages to Power BI relative to Excel. Therefore, almost all PowerPivot and Power Query for Excel content can and should be migrated to Power BI Desktop. An additional takeaway is the breadth of project types and scenarios that Power BI can support. You can design a corporate BI solution with a Power BI dataset to support hundreds of users across multiple teams, but you can also build a tightly focused solution such as monitoring system resources or documenting the contents of a dataset.

If you enjoyed this interview, check out Brett's latest book, Microsoft Power BI Cookbook. Also, read part one of the interview here to see how and where Power BI fits into the BI landscape and what it takes to stay successful in this industry.