
Author Posts

122 Articles

Listen: We discuss why chaos engineering and observability will be important in 2019 [Podcast]

Richard Gall
21 Dec 2018
1 min read
This week I published a post that explored some of the key trends in software infrastructure that security engineers, SREs, and SysAdmins should be paying attention to in 2019. There was clearly a lot to discuss, which is why I sat down with my colleague Stacy Matthews to explore some of the topics from the post in a little more depth. Enjoy! https://soundcloud.com/packt-podcasts/why-observability-and-chaos-engineering-will-be-vital-in-2019 What do you think? Is chaos engineering too immature for widespread adoption? And how easy will it be to begin building for observability?


Why ASP.NET makes building apps for mobile and web easy - Interview with Jason de Oliveira

Packt
20 Feb 2018
6 min read
Jason De Oliveira works as CTO for MEGA International, a software company in Paris (France) providing modeling tools for business transformation, enterprise architecture, and enterprise governance, risk, and compliance management. He is an experienced manager and senior solutions architect, with strong skills in software architecture and enterprise architecture. He has been awarded MVP C#/.NET by Microsoft for over 6 years for his numerous contributions to the Microsoft community. In this interview, Jason talks about the newly introduced features of .NET Core 2.0 and how they empower effective cross-platform application development. He also gives us a sneak peek at his recently released book, Learning ASP.NET Core 2.0.

Packt: Let's start with a very basic question. What is .NET Core? How is it different from the .NET Framework?

Jason De Oliveira: In the last 20 years, Microsoft has focused mainly on Windows and built many technologies around it. You can see that easily when looking at ASP.NET and the .NET Framework in general. They provide very good integration and extend the operating system for building great desktop and web applications, but only on the Windows platform. In the last 5 years, we have seen some major changes in the overall strategy and vision of Microsoft due to major changes in the IT market. The cloud-first as well as the mobile-first strategy, coupled with support for other operating systems, has led Microsoft to completely rethink most of its existing frameworks - including the .NET Framework. That is one of the major reasons why the .NET Core framework came into existence. It incorporates a new way of building applications, fully embracing multi-platform development and the latest standards and technologies - and what a great framework it has turned out to be!

ASP.NET Core 2.0 was recently announced in August 2017. What are the new features and updates introduced in this release to make the development of web apps easier?

The latest version of ASP.NET Core is 2.0, which provides much better performance than the versions before it. It has been open-sourced, so developers can understand how it works internally and adapt it easily to specific needs if necessary. Furthermore, the integration between .NET Core and Visual Studio has been improved. Some other important features of this new release are: the Meta-Package, which includes everything necessary to develop great web applications, has been added to the framework, and NuGet compatibility has been improved. This leads to much better developer productivity and efficiency. Web development can be fun, and using .NET Core in conjunction with the latest version of Visual Studio proves it!

What benefits does Entity Framework Core 2 offer while building a robust MVC web app?

When you are building web applications, you need to store your data somewhere. Today, most of the time the data is stored in relational databases (Microsoft SQL Server, Oracle, and so on). While there are multiple ways of connecting to a database using ASP.NET Core, we advise using Entity Framework Core 2, because of its simplicity and because it contains all the necessary features already built in. Another big advantage is that it abstracts the database structure from the object-oriented structure within your code (an approach commonly called ORM). Furthermore, it supports nearly all databases you can find on the market, either directly or via additional providers.

When it comes to .NET Core, do you think there is any scope for improvement? What can Microsoft do in the future to improve the suite?

To be honest, ASP.NET Core 2.0 has achieved a very high level of feature coverage. Nearly everything you can think of is included by default, which is quite remarkable. Nevertheless, Microsoft has already shipped an updated version called ASP.NET Core 2.1, and we can expect that it will further support and evolve the framework. However, an area of improvement could be Artificial Intelligence (AI). As you might know, Microsoft is currently investing very strongly in this area, and we think that we might see some features included in the next versions of ASP.NET Core. Also, in terms of testability of code and especially live unit testing, we hope to see some improvements in the future. Unit tests are important for building high-quality applications with fewer bugs, so a thorough integration of ASP.NET Core with the Visual Studio testing tools would be a big advantage for any developer.

Tell us something about your book. What makes it unique?

With this book, you will learn and have fun at the same time. The book is very easy to read, while containing basic and advanced concepts of ASP.NET Core 2.0 web development. It is not only about development, though. You will also see how to apply agile methodologies and use tools such as Visual Studio and Visual Studio Code. The examples in the book are real-world examples, which build up to a real application at the end - not stripped-down samples, but examples that you can adapt to your own application needs quickly. From the start to the end of the book we have applied a Minimum Viable Product (MVP) approach, meaning that at the end of each chapter you will have evolved the overall sample application a little bit more. This is motivating and interesting and will keep you hooked. You will see how to work in different environments, such as on-premises and in the cloud. We explain how to deploy, manage, and supervise your ASP.NET Core 2.0 web application in Microsoft Azure, Amazon, and Docker, for example.

For someone new to web development, what learning path in terms of technologies would you recommend? What is the choice of tools he/she should make to build the best possible web applications? Where does .NET Core 2.0 fit into the picture?

There are different types of web developers, who could take potentially different paths. Some will start with the graphical user interface and be more interested in the graphical representation of a web application. They should start with HTML5 and CSS3, and maybe use JavaScript to handle communication with the server. Then there are API developers, who are not very interested in graphical representation but more in building efficient APIs and web services, which reduce bandwidth and provide good performance. In our book, we show readers how to build views and controllers, while using the latest frontend technologies to provide a modern look and feel. We then explain how to use the built-in features of ASP.NET Core 2.0 for building Web APIs and how to connect them via Entity Framework Core 2 with the database, or provide a whole host of other services (such as sending emails, for example).


How to Face a Critical RAG-driven Generative AI Challenge

Denis Rothman
06 Nov 2024
15 min read
This article is an excerpt from the book "RAG-Driven Generative AI" by Denis Rothman. Explore the transformative potential of RAG-driven LLMs, computer vision, and generative AI with this comprehensive guide, from the basics to building a complex RAG pipeline.

Introduction

On a bright Monday morning, Dakota sits down to get to work and is called by the CEO of their software company, who looks quite worried. An important fire department needs a conversational AI agent to train hundreds of rookie firefighters nationwide on drone technology. The CEO looks dismayed because the data provided is spread over many websites around the country. Worse, the management of the fire department is coming over at 2 PM to see a demonstration and decide whether to work with Dakota's company or not. Dakota is smiling. The CEO is puzzled. Dakota explains that the AI team can put a prototype together in a few hours and be more than ready by 2 PM, and gets to work. The strategy is to divide the AI team into three sub-teams that will work in parallel on three pipelines based on the reference Deep Lake, LlamaIndex and OpenAI RAG program* they had tested and approved a few weeks back.

Pipeline 1: Collecting and preparing the documents provided by the fire department for this Proof of Concept (POC).
Pipeline 2: Creating and populating a Deep Lake vector store with the first batch of documents while the Pipeline 1 team continues to retrieve and prepare the documents.
Pipeline 3: Index-based RAG with LlamaIndex's integrated OpenAI LLM, performed on the first batch of vectorized documents.

The team gets to work at around 9:30 AM after devising their strategy. The Pipeline 1 team begins by fetching and cleaning a batch of documents. They run Python functions to remove punctuation except for periods, and noisy references within the content. Leveraging the automated functions they already have through the educational program, the result is satisfactory.

By 10 AM, the Pipeline 2 team sees the first batch of documents appear on their file server. They run the code they got from the RAG program* to create a Deep Lake vector store and seamlessly populate it with an OpenAI embedding model, as shown in the following excerpt:

from llama_index.core import StorageContext
vector_store_path = "hub://denis76/drone_v2"
dataset_path = "hub://denis76/drone_v2"
# overwrite=True will overwrite the dataset, False will append to it
vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=True)

Note that the path of the dataset points to the online Deep Lake vector store. The fact that the vector store is serverless is a huge advantage because there is no need to manage its size or storage process; you can begin to populate it in a few seconds! Also, to process the first batch of documents, overwrite=True will force the system to write the initial data. Then, starting with the second batch, the Pipeline 2 team can run overwrite=False to append the following documents. Finally, LlamaIndex automatically creates a vector store index:

storage_context = StorageContext.from_defaults(vector_store=vector_store)
# Create an index over the documents
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

By 10:30 AM, the Pipeline 3 team can visualize the vectorized (embedded) dataset in their Deep Lake vector store.
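For readers who want to reproduce the ingestion step outside the book's notebook, the following is a minimal, self-contained sketch of what the excerpt above does. It is an assumption-laden sketch rather than the author's exact code: import paths for the Deep Lake integration vary across llama-index releases, the ./drone_docs folder and hub://your_org/drone_v2 dataset path are placeholders, and an OpenAI API key plus an Activeloop token are assumed to be set in the environment.

# Hypothetical ingestion sketch; paths and import locations are assumptions, not the book's exact code.
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.deeplake import DeepLakeVectorStore  # requires the Deep Lake integration package

# Placeholder inputs: point these at your own documents and Deep Lake dataset.
documents = SimpleDirectoryReader("./drone_docs").load_data()
dataset_path = "hub://your_org/drone_v2"

# overwrite=True writes the first batch from scratch; later batches run with overwrite=False to append.
vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=True)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Embeds the documents (OpenAI embeddings by default) and indexes them in the Deep Lake store.
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)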
They create a LlamaIndex query engine on the dataset:

from llama_index.core import VectorStoreIndex
vector_store_index = VectorStoreIndex.from_documents(documents)
…
vector_query_engine = vector_store_index.as_query_engine(similarity_top_k=k, temperature=temp, num_output=mt)

Note that the OpenAI Large Language Model is seamlessly integrated into LlamaIndex with the following parameters:

k, in this case k=3, specifies the number of documents to retrieve from the vector store. The retrieval is based on the similarity of embedded user inputs and embedded vectors within the dataset.
temp, in this case temp=0.1, determines the randomness of the output. A low value such as 0.1 forces the similarity search to be precise. A higher value would allow for more diverse responses, which we do not want for this technological conversational agent.
mt, in this case mt=1024, determines the maximum number of tokens in the output.

A cosine similarity function was added to make sure that the outputs matched the sample user inputs:

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity  # needed by the helper below
model = SentenceTransformer('all-MiniLM-L6-v2')
def calculate_cosine_similarity_with_embeddings(text1, text2):
    embeddings1 = model.encode(text1)
    embeddings2 = model.encode(text2)
    similarity = cosine_similarity([embeddings1], [embeddings2])
    return similarity[0][0]

By 11:00 AM, all three pipeline teams are warmed up and ready to go full throttle! While the Pipeline 2 team was creating the vector store and populating it with the first batch of documents, the Pipeline 1 team prepared the next several batches. At 11:00 AM, Dakota gave the green light to run all three pipelines simultaneously. Within a few minutes, the whole RAG-driven generative AI system was humming like a beehive! By 1:00 PM, Dakota and the three pipeline teams were working on a PowerPoint slideshow with a copilot. Within a few minutes, it was automatically generated based on their scenario. At 1:30 PM, they had time to grab a quick lunch. At 2:00 PM, the fire department management, Dakota's team, and the CEO of their software company were in the meeting room.

Dakota's team ran the PowerPoint slide show and began the demonstration with a simple input:

user_input="Explain how drones employ real-time image processing and machine learning algorithms to accurately detect events in various environmental conditions."

The response displayed was satisfactory:

Drones utilize real-time image processing and machine learning algorithms to accurately detect events in various environmental conditions by analyzing data captured by their sensors and cameras. This technology allows drones to process visual information quickly and efficiently, enabling them to identify specific objects, patterns, or changes in the environment in real time. By employing these advanced algorithms, drones can effectively monitor and respond to different situations, such as wildfires, wildlife surveys, disaster relief efforts, and agricultural monitoring, with precision and accuracy.

Dakota's team then showed that the program could track and display the original documents the response was based on. At one point, the fire department's top manager, Taylor, exclaimed, "Wow, this is impressive! It's exactly what we were looking for!" Of course, Dakota's CEO began discussing the number of users, cost, and timelines with Taylor. In the meantime, Dakota and the rest of the fire department's team went out to drink some coffee and get to know each other.
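For readers following along in code, here is a minimal sketch of how the demonstration query above would be issued against the engine built earlier, reusing the article's calculate_cosine_similarity_with_embeddings helper to sanity-check the answer against a reference sentence. The reference text and the 0.7 threshold are illustrative assumptions rather than values from the book, and the .query()/.response usage assumes the standard llama-index Response API.

# Illustrative end-to-end call; the reference answer and threshold below are assumptions for the demo.
k, temp, mt = 3, 0.1, 1024  # parameter values described in the article
vector_query_engine = vector_store_index.as_query_engine(
    similarity_top_k=k, temperature=temp, num_output=mt
)

user_input = ("Explain how drones employ real-time image processing and machine learning "
              "algorithms to accurately detect events in various environmental conditions.")
response = vector_query_engine.query(user_input)
print(response.response)  # the generated answer, grounded in the retrieved documents

# Sanity check: compare the answer with a short reference sentence (hypothetical threshold).
reference = "Drones use onboard cameras and machine learning models to detect events in real time."
score = calculate_cosine_similarity_with_embeddings(response.response, reference)
print(f"Cosine similarity vs. reference: {score:.2f}")
if score < 0.7:
    print("Low similarity: review the retrieved source documents before trusting the answer.")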
Fire departments intervene efficiently at short notice in emergencies. So can expert-level AI teams! https://github.com/Denis2054/RAG-Driven-Generative-AI/blob/main/Chapter03/Deep_Lake_LlamaIndex_OpenAI_RAG.ipynb

Conclusion

In facing a high-stakes, time-sensitive challenge, Dakota and their AI team demonstrated the power and efficiency of RAG-driven generative AI. By leveraging a structured, multi-pipeline approach with tools like Deep Lake, LlamaIndex, and OpenAI's advanced models, the team was able to integrate scattered data sources quickly and effectively, delivering a sophisticated, real-time conversational AI prototype tailored for firefighter training on drone technology. Their success showcases how expert planning, resourceful use of AI tools, and teamwork can transform a complex project into a streamlined solution that meets client needs. This case underscores the potential of generative AI to create responsive, practical solutions for critical industries, setting a new standard for rapid, high-quality AI deployment in real-world applications.

Author Bio

Denis Rothman graduated from Sorbonne University and Paris-Diderot University, and as a student, he wrote and registered a patent for one of the earliest word2vector embedding and word piece tokenization solutions. He started a company focused on deploying AI and went on to author one of the first AI cognitive NLP chatbots, applied as a language teaching tool for Moët et Chandon (part of LVMH), among others. Denis rapidly became an expert in explainable AI, incorporating interpretable, acceptance-based explanation data and interfaces into solutions implemented for major corporate projects in the aerospace, apparel, and supply chain sectors. His core belief is that you only really know something once you have taught somebody how to do it.


"Technology opens up so many doors" - An Interview with Sharon Kaur from School of Code

Packt
14 Feb 2018
5 min read
School of Code is a company on a mission to help more people benefit from technology. It has created an online multiplayer platform that aims to make coding fun, simple and accessible to all. The platform has been used by over 120,000 people since its launch in December 2016, and School of Code recently won the 'Transforming Lives' award at the 2017 Education Awards. The company was founded by Chris Meah while he was completing his PhD in Computer Science at the University of Birmingham.

As headline sponsor, Packt's founder and CEO Dave Maclean shared his thoughts on the programme: "The number and diversity of the applicants proves how many people in Birmingham are looking to learn key skills like HTML, CSS, Javascript and Node.JS. Packt is excited to sponsor School of Code's Bootcamp participants to increase the population of skilled developers in the West Midlands, which will have an impact on the growth of innovative start-ups in this region."

We spoke to Sharon Kaur, who has been involved with a School of Code bootcamp, about her experience and her perspective on tech in 2018.

Packt: Hi Sharon! Tell us a little about yourself.

Sharon Kaur: My name is Sharon. I am a choreographer and dancer for international music groups. I am also an engineering and technology advocate and STEM Ambassador for the UK and India - my main aim is getting more young girls and ethnic minorities interested in and pursuing a career in science, technology and engineering.

What were you doing before you enrolled for School of Code, and what made you want to sign up?

I previously studied for my BEng honours and MSc degrees at the University of Surrey, in general and medical engineering. I worked in the STEM education industry for a few years and then gained my teaching qualification in secondary school/sixth form Science in Birmingham. I recently started learning more about the technology industry after completing an online distance-learning course in cyber security. I was on Facebook one day in June and I saw an advert for the first ever School of Code Bootcamp, and I just decided to dive in and go for it!

Do you think there is a diversity issue in the tech sector? Has it affected you in any way?

I definitely think there is a major problem in the technology industry in terms of diversity. There are far too many leadership and management positions taken up by upper/middle-class white men. There needs to be more outreach work done to attract more women and ethnic minority people into this sector, as well as continuing to work with them afterwards, to prevent them from leaving tech in the middle of their careers! This has not affected me in any direct way, but as a female from an engineering background, which is also a very male-dominated sector, I have experienced some gender discrimination and credit for work I produced being given to someone else.

Why do you think making technology accessible to all is important?

Technology opens up so many doors to some really exciting and life-fulfilling work. It really is the future of this planet, and in order to keep improving the progress of the global economy and human society, we need more and more advanced technology and methods, daily. This means that there is a dire need for a large number of highly competent employees working continuously in the tech sector.

What do you think the future looks like for people working in the tech industry? Will larger companies strive to diversify their workforce, and why should they?
In my opinion, the future looks extremely exciting and progressive! Technology will only become more and more futuristic, and we could be looking at getting more into the sci-fi age come the next few centuries, give or take. So the people who will work in the tech sector will be highly sought after - lucky them! I would hope, though, that large corporations will change their employee recruitment policies towards a more diverse intake, if they truly want to reach the top of their games, with maximum efficiency and employee wellbeing.

School of Code encourages the honing of soft skills through networking, teamwork and project management. Do you think these skills are vital for the future of the tech industry and attracting a new generation, shaking off the stereotype that all coders are solitary beings? Why?

Yes, definitely - soft skills are just as important, if not slightly more so, than the technical aptitude of an employee in the tech industry! With collaboration and business acumen, we can bring the world of technology together and use it to make a better life for every human being on this planet. The technology industry needs to show its solidarity, not its divisiveness, in attracting the next generation of young techies, if it wants to maintain its global outreach.

What advice would you give to someone who wanted to get into the tech sector but may be put off by the common preconception that it is made up of male, white privilege?

I would say go for it, dive in at the deep end and come out the other side the better person in the room! Have the courage to stand up for your beliefs and dreams, and don't ever let anyone tell you or make you feel like you don't deserve to be standing there with everyone else in the room - pick your battles wisely, become more industry- and people-savvy, choose your opportune moment to shine, and you'll see all the other techies begging you to work with them, not even for them!

Find out more about School of Code. Download some of the books the Bootcampers found useful during the course: Thinking in HTML, Thinking in CSS, the Thinking in JS series, MEAN Web Development, React and React Native, and Responsive Web Design.


Expert Insights: How sports analytics is empowering better decision-making

Amey Varangaonkar
14 Nov 2017
11 min read
Analytics is slowly changing the face of the sports industry as we know it. Data-driven insights are being used to improve team and individual performance, and to get that all-important edge over the competition. But what exactly is sports analytics? And how is it being used? What better way to get answers to these questions than asking an expert himself!

Gaurav Sundararaman is a Senior Stats Analyst at ESPN, currently based in Bangalore, India. With over 10 years of experience in the field of analytics, Gaurav worked as a research analyst and a consultant in the initial phase of his career. He then ventured into sports analytics in 2012 and played a major role in the analytics division of SportsMechanics India Pvt. Ltd., where he was the analytics consultant for the T20 World Cup-winning West Indies team in 2016.

In this interview, Gaurav takes us through the current landscape of sports analytics, and talks about how analytics is empowering better decision-making in sports.

Key Takeaways

Sports analytics pertains to finding actionable, useful insights from sports data, which teams can use to gain a competitive advantage over the opposition.
Instincts backed by data make on- and off-field decisions more powerful and accurate.
The rise of IoT and wearable technology has boosted sports analytics. With more data available for analysis, insights can be unique and very helpful.
Analytics is being used in sports right from improving player performance to optimizing ticket prices and understanding fan sentiments.
Knowledge of tools for data collection, analysis and visualization, such as R, Python and Tableau, is essential for a sports analyst.
A thorough understanding of the sport, an up-to-date skillset and strong communication with players and management are equally important factors for efficient analytics.
Adoption of analytics within sports has been slow, but steady. More and more teams are now realizing the benefits of sports analytics and are adopting an analytics-based strategy.

Complete Interview

Analytics is finding widespread applications in almost every industry today - how has the sports industry changed over the years? What role is analytics playing in this transformation?

The sports industry has been relatively late in adopting analytics. That said, the use of analytics in sports has also varied geographically. In the west, analytics plays a big role in helping teams, as well as individual athletes, make decisions. Better infrastructure and quick adoption of the latest trends in technology are important factors here. Also, investment in sports starts from a very young age in the west, which also makes a huge difference. In contrast, many countries in Asia are still lagging behind when it comes to adopting analytics, and still rely on traditional techniques to solve problems. A combination of analytics with traditional knowledge from experience would go a long way in helping teams, players and businesses succeed. Previously the sports industry was a very closed community. Now, with the advent of analytics, the industry has managed to expand its horizons. We witness more non-sportsmen playing a major part in decision making. They understand the dynamics of the sports business and how to use data-driven insights to influence it.

Many major teams across different sports such as Football (Soccer), Cricket, American Football, Basketball and more have realized the value of data and analytics. How are they using it? What advantages does analytics offer them?

One thing I firmly believe is that analytics can't replace skills and can't guarantee wins. What it can do is ensure there is logic behind certain plans and decisions. Instincts backed by data make decisions more powerful. I always tell the coaches or players - go with your gut and instincts as Plan A. If that does not work out, your fallback could be Plan B, based on trends and patterns derived from data. It turns out to be a win-win for both. Analytics offers a neutral perspective which sometimes players or coaches may not realize. Each sport has a unique way of applying analytics to make decisions and obviously, as analysts, we need to understand the context and map the relevant data. As far as using the analytics is concerned, the goals are pretty straightforward - be the best, beat the opponents and aim for sustained success. Analytics helps you achieve each of these objectives.

The rise of IoT and wearable technology over the last few years has been incredible. How has it affected sports, and sports analytics in particular?

It is great to see that many companies are investing in such technologies. It is important to identify where wearables and IoT can be used in sport and where they can cause maximum impact. These devices allow in-game monitoring of players, their performance, and their current physical state. Also, I believe that beyond on-field uses, these technologies will be very useful in engaging fans as well. Data derived from these devices could be used in broadcasting as well as in providing a good experience for fans in the stadiums. This will encourage more and more people to watch games in stadiums and not in the comfort of their homes. We have already seen a beginning, with a few stadiums around the world leveraging technology (IoT). The Mercedes-Benz Stadium (home of the Atlanta Falcons) is a high-tech stadium powered by IBM. Sacramento is building a state-of-the-art facility for the Sacramento Kings. This is just the start, and it will only get better with time.

How does one become a sports analyst? Are there any particular courses/certifications that one needs to complete in order to become one? Can you share with us your journey in sports analytics?

To be honest, there are no professional courses yet in India to become an analyst. There are a couple of colleges which have just started offering sports analytics as a course in their post-graduation programs. However, there are a few companies (Sports Mechanics and Kadamba Technologies in Chennai) that offer jobs that can enable you to become a sports analyst if you are really good. If you are a freelancer, then my advice would be to ensure you brand yourself well, showcase your knowledge through social media platforms and get a breakthrough via contacts. After my MBA, Sports Mechanics (a leader in this space), a company based in Chennai, was looking for someone to start their data practice. I was just lucky to be at the right place at the right time. I worked there for 4 years and was able to learn a lot about the industry and what works and what does not. Being a small company, I was lucky to don multiple hats and work on different projects across the value chain. I then moved on and joined the lovely team of ESPNcricinfo, where I work on their stats team.

What are the tools and frameworks that you use for your day-to-day tasks? How do they make your work easier?

There are no specific tools or frameworks. It depends on the enterprise you are working for.
Usually, they are proprietary tools of the company. Most of these tools are used either to collect, mine or visualize data. Interpreting the information and presenting it in a manner which users understand is important, and that is where certain applications or frameworks are used. However, to be ready for the future it would be good to be skilled in tools that support data collection, analysis and visualization, namely R, Python and Tableau, to name a few.

Do sports analysts have to interact with players and the coaching staff directly? How do you communicate your insights and findings with the relevant stakeholders?

Yes, they have to interact with players and management directly. If not, the impact will be minimal. Communicating insights is very important in this industry. Too much analysis could lead to paralysis. We need to identify what exactly each player or coach is looking for, based on their game, and try to provide them the information in a crisp manner which helps them make decisions on and off the field. For each stakeholder the magnitude of the information provided is different. For the coach and management, the insights can be detailed, while for the players we need to keep it short and to the point.

The insights you generate must not only be limited to enhancing the performance of a team on the field but much more than that. Could you give us some examples?

Insights can vary. For the management, it could deal with how to maximise revenue or save some money in an auction. For coaches, it could help them know about their team's as well as the opposition's strengths and weaknesses from a different perspective. For captains, data could help in identifying some key strategies on the field. For example, in cricket, it could help the captain determine which bowler to bring on against which opposition batsman, or where to place the fielders. Off the field, one area where analytics could play a big role would be in grassroots development and tracking of an athlete from an early age to ensure he is prepared for the biggest stage. Monitoring performance, improving physical attributes by following a specific regimen, assessing injury records and designing specific training programs are some ways in which this could be done.

What are some of the other challenges that you face in your day-to-day work?

Growth in this industry can be slow sometimes. You need to be very patient, work hard and ensure you follow the sport very closely. There are not many analytical obstacles as such, but understanding the requirements and what exactly the data needs are can be quite a challenge.

Despite all the buzz, there are quite a few sports teams and organizations who are still reluctant to adopt an analytics-based strategy - why do you think that is the case? What needs to change?

The reason for the slow adoption could be the lack of successful case studies and awareness. In most sports, when so many decisions are taken on the field, sometimes the players' ability and skill seem far superior to anything else. As more instances of successful execution of data-based trends come up, we are likely to see more teams adopting a data-based strategy. Like I mentioned earlier, analytics needs to be used to help the coach and captain take the most logical and informed decisions. Decision-makers need to be aware of the way it is used and how much impact it can have. This awareness is vital to increasing the adoption of analytics in sports.

Where do you see sports analytics in the next 5-10 years?

Today in sports many decisions are taken on gut feeling, and I believe there should be a balance. That is where analytics can help. In sports like cricket, only around 30% of the data is used and more emphasis is given to video. Meanwhile, if we look at soccer or basketball, the usage of data and video analytics is close to 60-70% of its potential. Through awareness and trying out new plans based on data, we can increase the usage of analytics in cricket to 60-70% in the next few years. Despite the current shortcomings, it is fair to say that there is a progressive and positive change at the grassroots level across the world. Data-based coaching and access to technology are slowly being made available to teams as well as budding sportsmen and sportswomen. Another positive is that investment in the sports industry is growing steadily. I am confident that in a couple of years, we will see more job opportunities in sports. Maybe in five years, the entire ecosystem will be more structured and professional. We will witness analytics playing a much bigger role in helping stakeholders make informed decisions, as data-based insights become even more crucial.

Lastly, what advice do you have for aspiring sports analysts?

My only advice would be: be passionate, build a strong network of people around you, and constantly be on the lookout for opportunities. Also, it is important to keep updating your skill-set in terms of the tools and techniques needed to perform efficient and faster analytics. Newer and better tools keep coming up very quickly, which make your work easier and faster. Be on the lookout for such tools! One also needs to identify their own niche based on their strengths and try to build on that. The industry is on the cusp of growth and, as budding analysts, we need to be prepared to take off when the industry matures. Build your brand and talk to more people in the industry - figure out what you want to do to keep yourself in the best position to grow with the industry.


Has Machine Learning become more accessible?

Packt Editorial Staff
04 Sep 2017
9 min read
Sebastian Raschka is a machine learning expert. He is currently a researcher at Michigan State University, where he is working on computational biology. But he is also the author of Python Machine Learning, the most popular book ever published by Packt. It's a book that has helped to define the field, breaking it out of the purely theoretical and showing readers how machine learning algorithms can be applied to everyday problems. Python Machine Learning was published in 2015, but Sebastian is back with a brand new edition, updated and improved for 2017, written alongside his colleague Vahid Mirjalili. We were lucky enough to catch Sebastian in between his research and working on the new edition to ask him a few questions about what's new in the second edition of Python Machine Learning, and to get his assessment of the key challenges and opportunities in data science today.

What's the most interesting takeaway from your book?

Sebastian Raschka: In my opinion, the key takeaway from my book is that machine learning can be useful in almost every problem domain. I cover a lot of different subfields of machine learning in my book: classification, regression analysis, clustering, feature extraction, dimensionality reduction, and so forth. By providing hands-on examples for each one of those topics, my hope is that people can find inspiration for applying these fundamental techniques to drive their research or industrial applications. Also, using well-developed and maintained open source software makes machine learning very accessible to a broad audience of experienced programmers as well as people who are new to programming. And by introducing the basic mathematics behind machine learning, we can appreciate that machine learning is more than just black-box algorithms, giving readers an intuition of the capabilities but also the limitations of machine learning, and how to apply those algorithms wisely.

What's new in the second edition?

SR: As time and the software world moved on after the first edition was released in September 2015, we decided to replace the introduction to deep learning via Theano. No worries, we didn't remove it! But it got a substantial overhaul and is now based on TensorFlow, which has become a major player in my research toolbox since its open source release by Google in November 2015. Along with the new introduction to deep learning using TensorFlow, the biggest additions to this new edition are three brand new chapters focusing on deep learning applications: a more detailed overview of the TensorFlow mechanics, an introduction to convolutional neural networks for image classification, and an introduction to recurrent neural networks for natural language processing. Of course, and in a similar vein to the rest of the book, these new chapters do not only provide readers with practical instructions and examples but also introduce the fundamental mathematics behind those concepts, which is an essential building block for understanding how deep learning works.

What do you think is the most exciting trend in data science and machine learning?

SR: One interesting trend in data science and machine learning is the development of libraries that make machine learning even more accessible. Popular examples include TPOT and AutoML/auto-sklearn. In other words, libraries that further automate the building of machine learning pipelines.
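To make this concrete, below is a minimal sketch of the kind of automated pipeline search Sebastian is describing, using the classic TPOT API on scikit-learn's digits dataset. The parameters (five generations, a population of 20) are illustrative choices for a quick run rather than recommendations from the interview or the book.

# Minimal AutoML sketch with the classic TPOT API (pip install tpot scikit-learn).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Small search budget so the demo finishes quickly; real runs use larger values.
tpot = TPOTClassifier(generations=5, population_size=20, random_state=42, verbosity=2)
tpot.fit(X_train, y_train)            # searches over preprocessing + model pipelines
print(tpot.score(X_test, y_test))     # accuracy of the best pipeline found
tpot.export("best_pipeline.py")       # writes the winning pipeline as plain scikit-learn code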
While such tools do not aim to replace experts in the field, they may be able to make machine learning even more accessible to an even broader audience of non-programmers. However, being able to interpret the outcomes of predictive modeling tasks and being able to evaluate the results appropriately will always require a certain amount of knowledge. Thus, I see those tools not as replacements but rather as assistants for data scientists, to automate tedious tasks such as hyperparameter tuning. Another interesting trend is the continued development of novel deep learning architectures and the large progress in deep learning research overall. We've seen many interesting ideas, from generative adversarial networks (GANs) to densely connected neural networks (DenseNets) and ladder networks. Large progress has been made in this field thanks to those new ideas and the continued improvements of deep learning libraries (and our computing infrastructure) that accelerate the implementation of research ideas and the development of these technologies in industrial applications.

How has the industry changed since you first started working?

SR: Over the years, I have noticed that more and more companies embrace open source, i.e., by sharing parts of their tool chain on GitHub, which is great. Also, data science and open source related conferences keep growing, which means more and more people are not only getting interested in data science but also considering working together, for example, as open source contributors in their free time, which is nice. Another thing I noticed is that as deep learning becomes more and more popular, there seems to be an urge to apply deep learning to problems even if it doesn't necessarily make sense - i.e., the urge to use deep learning just for the sake of using deep learning. Overall, the positive thing is that people get excited about new and creative approaches to problem-solving, which can drive the field forward. Also, I noticed that more and more people from other domains are becoming familiar with the techniques used in statistical modeling (thanks to "data science") and machine learning. This is nice, since good communication in collaborations and teams is important, and a common knowledge of the basics makes this communication indeed a bit easier.

What advice would you give to someone who wants to become a data scientist?

SR: I recommend starting with a practical, introductory book or course to get a brief overview of the field and the different techniques that exist. A selection of concrete examples would be beneficial for understanding the big picture and what data science and machine learning are capable of. Next, I would start a passion project while trying to apply the newly learned techniques from statistics and machine learning to address and answer interesting questions related to this project. While working on an exciting project, I think the practitioner will naturally become motivated to read through the more advanced material and improve their skills.

What are the biggest misunderstandings and misconceptions people have about machine learning today?

Well, there's this whole debate on AI turning evil. As far as I can tell, the fear-mongering is mostly driven by journalists who don't work in the field and are apparently looking for catchy headlines. Anyway, let me not iterate over this topic, as readers can find plenty of information (from both viewpoints) in the news and all over the internet.
To echo an earlier comment on this topic, here is Andrew Ng's famous quote: "I don't work on preventing AI from turning evil for the same reason that I don't work on combating overpopulation on the planet Mars."

What's so great about Python? Why do you think it's used in data science and beyond?

SR: It is hard to tell which came first: Python becoming a popular language so that many people developed all the great open-source libraries for scientific computing, data science, and machine learning, or Python becoming so popular due to the availability of these open-source libraries. One thing is obvious though: Python is a very versatile language that is easy to learn and easy to use. While most algorithms for scientific computing are not implemented in pure Python, Python is an excellent language for interacting with very efficient implementations in Fortran, C/C++, and other languages under the hood. This - calling code from computationally efficient low-level languages while also providing users with a very natural and intuitive programming interface - is probably one of the big reasons behind Python's rise to popularity as a lingua franca in the data science and machine learning community.

What tools, frameworks and libraries do you think people should be paying attention to?

There are many interesting libraries being developed for Python. As a data scientist or machine learning practitioner, I'd especially want to highlight the well-maintained tools from the core Python scientific stack:

- NumPy and SciPy as efficient libraries for working with data arrays and scientific computing
- Pandas to read in and manipulate data in a convenient data frame format
- matplotlib for data visualization (and seaborn for additional plotting capabilities and more specialized plots)
- scikit-learn for general machine learning

There are many, many more libraries that I find useful in my projects. For example, Dask is an excellent library for working with data frames that are too large to fit into memory and for parallelizing computations across multiple processors. Or take TensorFlow, Keras, and PyTorch, which are all excellent libraries for implementing deep learning models.

What does the future look like for Python?

In my opinion, Python's future looks very bright! For example, Python has just been ranked as the top programming language by IEEE Spectrum as of July 2017. While I mainly speak of Python from the data science/machine learning perspective, I have heard from many people in other domains that they appreciate Python as a versatile language and its rich ecosystem of libraries. Of course, Python may not be the best tool for every problem, but it is very well regarded as a "productive" language for programmers who want to "get things done." Also, while the availability of plenty of libraries is one of the strengths of Python, I must also highlight that most packages that have been developed are still being exceptionally well maintained, and new features and improvements to the core data science and machine learning libraries are being added on a daily basis. For instance, the NumPy project, which has been around since 2006, just received a $645,000 grant to further support its continued development as a core library for scientific computing in Python. At this point, I also want to thank all the developers of Python and its open source libraries that have made Python what it is today.
It's an immensely useful tool to me, and as a Python user, I also hope you will consider getting involved in open source - every contribution is useful and appreciated: small documentation fixes, bug fixes in the code, new features, or entirely new libraries. Again, and with big thanks to the awesome community around it, I think Python's future looks very bright.

“Git, like all other version control tools, exists to solve for one problem: change” - Joseph Muli and Alex Magana [Interview]

Packt Editorial Staff
09 Oct 2018
5 min read
An unreliable versioning tool makes product development a herculean task. Creating and enforcing checks and controls for the introduction, scrutiny, approval, merging, and reversal of changes in your source code are some effective methods to ensure a secure development environment. Git and GitHub offer constructs that enable teams to conduct version control and collaborative development in an effective manner. When properly utilized, Git and GitHub promote agility and collaboration across a team, and in doing so, enable teams to focus and deliver on their mandates and goals. We recently interviewed Joseph Muli and Alex Magana, the authors of the Introduction to Git and GitHub course. They discussed the various benefits of Git and GitHub while sharing some best practices and myths.

Author Bio

Joseph Muli loves programming, writing, teaching, gaming, and travelling. Currently, he works as a software engineer at Andela and Fathom, and specializes in DevOps and Site Reliability. Previously, he worked as a software engineer and technical mentor at Moringa School. You can follow him on LinkedIn and Twitter.

Alex Magana loves programming, music, adventure, writing, reading, architecture, and is a gastronome at heart. Currently, he works as a software engineer with BBC News and Andela. Previously, he worked as a software engineer with SuperFluid Labs and Insync Solutions. You can follow him on LinkedIn or GitHub.

Key Takeaways

Securing your source code with version control is effective only when you do it the right way. Understanding the best practices used in version control can make it easier for you to get the most out of Git and GitHub.
GitHub is loaded with an elaborate UI. Learning how to navigate the GitHub UI and installing the Octotree browser extension will immensely help your development process.
GitHub is a powerful tool that is equipped with useful features. Exploring the feature branch workflow and features such as forking, submodules, and rebasing will enable you to make optimum use of GitHub.
The more elaborate the tools, the more time they can consume if you don't know your way through them. Master the commands for debugging and maintaining a repository to speed up your software development process.
Keep your code updated with the latest changes using continuous integration tools that integrate with GitHub, such as CircleCI or TravisCI.
The struggle isn't over until the code is successfully released to production. With GitHub's release management features, you can learn to complete hiccup-free software releases.

Full Interview

Why is Git important? What problem is it solving?

Git, like all other version control tools, exists to solve for one problem: change. This has been a recurring issue, especially when coordinating work on teams, both local and distributed, and that is specifically an advantage of Git through hubs such as GitHub, BitBucket and GitLab. The tool was created by Linus Torvalds in 2005 to aid development and contribution on the Linux kernel. However, this doesn't necessarily limit Git to code; any product or project that requires or exhibits characteristics such as having multiple contributors, requiring release management and versioning stands to have an improved workflow through Git. This also puts into perspective that there is no single standard; it's advisable to use what best suits your product(s).

What other similar solutions or tools are out there? Why is Git better?

As mentioned earlier, other tools do exist to aid in version control.
There are a lot of factors to consider when choosing a version control system for your organization, depending on product needs and workflows. Some organizations have in-house versioning tools because that suits their development. Other organizations, for reasons such as privacy, security or support, may look for an integration between third-party and in-house tools. Git primarily exists to provide a faster, distributed version control system that is not tied to a central repository, hub or project. It is highly scalable and portable. Other version control tools include Apache Subversion, Mercurial and Concurrent Versions System (CVS).

How can Git help developers? Can you list some specific examples (real or imagined) of how it can solve a problem?

A simple way to define Git's indispensability is that it enables fast, persistent and accessible storage. This implies that changes to code throughout a product's life cycle can be viewed and updated on demand, each with simple and compact commands to enable the process. Developers can track changes from multiple contributors, blame introduced bugs and revert where necessary. Git enables multiple workflows that align with practices such as Agile, e.g. feature branch workflows, and others, including forking workflows for distributed contribution, i.e. to open source projects.

What are some best tips for using Git and GitHub?

These are some of the best practices you should keep in mind while learning or using Git and GitHub:

Document everything. Utilize the README.md and wikis.
Keep simple and concise naming conventions.
Adopt naming prefixes.
Correspond a PR and branch to a ticket or task.
Organize and track tasks using issues.
Use atomic commits.

[Editor's note: To explore these tips further, read the authors' post '7 tips for using Git and GitHub the right way'.]

What are the myths surrounding Git and GitHub?

Just as every solution or tool has its own positives and negatives, Git is also surrounded by myths one should be aware of. Some of these are:

Git is GitHub.
Backups are equivalent to version control.
Git is only suitable for teams.
To effectively use Git, you need to learn every command to work.

[Editor's note: To explore these myths further, read the authors' post '4 myths about Git and GitHub you should know about'.]

Related reading:
GitHub's new integration for Jira Software Cloud aims to provide teams a seamless project management experience
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub
GitHub introduces 'Experiments', a platform to share live demos of their research projects


Why you should start learning Spring Boot: An interview with Greg Turnquist

Packt
05 Feb 2018
5 min read
If you're not sure what Spring Boot is exactly, or why it's becoming such an important part of the Java development ecosystem, you're in the right place. We'll explain: Spring Boot is a micro framework built by the team at Pivotal that has been designed to simplify the bootstrapping and development of new Spring applications. To put it simply, it gets you up and running as quickly as possible. Greg Turnquist has had significant experience with the Spring team at Pivotal for some time, which means he is in the perfect position to give insight into the software. We spoke to Greg recently to shed some light on Spring Boot, as well as his latest book, Learning Spring Boot 2.0 - Second Edition. Greg tweets as @gregturn on Twitter.

Why you should use Spring Boot

Packt: Spring Boot is a popular tool for building production-grade enterprise applications in Spring. What do you think are the 3 notable features of Spring Boot that set it apart from the other tools available out there?

Greg Turnquist: The three characteristics I have found interesting are:

Simplicity and ease of building new apps
Boot's ability to back off when you define custom components
Boot's ability to respond to community feedback as it constantly adds valued features

Packt: You have a solid track record of developing software. What tools do you use on a day-to-day basis?

GT: As a member of the Spring team, I regularly use IntelliJ IDEA, Slack, Gmail, Homebrew, various CI tools like CircleCI/TravisCI/Bamboo, Maven/Gradle, and Sublime Text 3.

How to start using Spring Boot

Packt: For a newbie developer, would you suggest getting started with Spring first, before trying their hand at Spring Boot? Or is Boot so simple to learn that a Java developer could pick it up straight away and build applications?

GT: In this day and age, there is no reason not to start with Spring Boot. The programming model is so simple and elegant that you can have a working web app in five minutes or less. And considering Spring Boot IS Spring, the argument is almost false.

Packt: How does the new edition of Learning Spring Boot prepare readers to be industry-ready? For existing Spring and Spring Boot developers, what are the aspects to look forward to in your book?

GT: This book contains a wide range of practical examples covering areas such as the web, data access, developer tools, messaging, WebSockets, and security. By building a realistic example throughout the book on top of Java's de facto standard toolkit, it should be easy to learn valuable lessons needed in today's industry. Additionally, using Project Reactor throughout, the reader will be ready to build truly scalable apps. As the Spring portfolio adopts support from Project Reactor, this is the only book on the market focused on that paradigm. Casting all these real-world problems in light of such a powerful, scalable toolkit should be eagerly received. I truly believe this book helps bend the curve so that people can get operational faster and are able to meet their needs.

How well does Spring Boot integrate with JavaScript and JavaScript frameworks?

Packt: You also work a bit with JavaScript. Where do you think Spring and Spring Boot support for full-stack development with JS frameworks is heading?

GT: Spring Boot provides first-class support for either dropping in WebJars or self-compiled JavaScript modules, such as with Webpack.
The fact that many shops are moving off of Ruby on Rails and onto Spring Boot is evidence that Boot has everything needed to build strong, powerful apps with full-blown front ends to meet the needs of development shops.
What does the future hold for Spring Boot?
Packt: Where do you see the future of Spring Boot's development going? What changes or improvements can the community expect in future releases?
GT: Spring Boot has a dedicated team backing its efforts that at the same time is very respectful of community feedback. Adopting support for reactive programming is one such example that has been in motion for over two years. I think core things like the "Spring way" aren't going anywhere since they are all proven approaches. At the same time, support for an increasing number of 3rd party libraries and more cloud providers will be something to keep an eye on. Part of the excitement is not seeing exactly where things are going as well, so I look forward to the future of Spring Boot along with everyone else.
Why you should read Learning Spring Boot
Packt: Can you give developers 3 reasons why they should pick up your book?
GT: Are you interested in the hottest Java toolkit that is out there? Do you want to have fun building apps? And do you want to take a crack at the most revolutionary addition made to the Spring portfolio (Project Reactor)? If you answered yes to any of those, then this book is for you.
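Greg's point about having "a working web app in five minutes or less" is easy to illustrate. The sketch below is a minimal, self-contained Spring Boot application of the kind he describes; it is not taken from his book, and the package, class, and endpoint names are invented for illustration. It assumes a project with the spring-boot-starter-web dependency on the classpath.

```java
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A complete Spring Boot web application in one file:
// auto-configuration, an embedded servlet container, and one REST endpoint.
@SpringBootApplication
@RestController
public class DemoApplication {

    // Responds to GET / with a plain-text greeting.
    @GetMapping("/")
    public String hello() {
        return "Hello from Spring Boot!";
    }

    public static void main(String[] args) {
        // Starts the embedded server (Tomcat by default) on port 8080.
        SpringApplication.run(DemoApplication.class, args);
    }
}
```

Running `main` serves the endpoint on port 8080, and Boot "backs off" as soon as you register your own beans or configuration, which is the behaviour Greg highlights above.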


Fastly SVP, Adam Denenberg on Fastly’s new edge resources, edge computing, fog computing, and more

Bhagyashree R
30 Sep 2019
9 min read
Last month, Fastly, a provider of an edge cloud platform, introduced a collection of resources to help developers learn the ins and outs of popular cloud solutions. The collection consists of step-by-step tutorials and ready-to-deploy code that developers can customize and deploy to their Fastly configuration. We had the opportunity to interview Adam Denenberg, Fastly's SVP of Customer Solutions, to get more insight into this particular project and other initiatives Fastly is taking to empower developers. We also grabbed this opportunity to talk to Denenberg about the emergence and growth of edge computing and fog computing and what it all means for the industry.
What are the advantages of edge computing over cloud?
Cloud computing is a centralized service that provides computing resources including servers, storage, databases, networking, software, analytics, and intelligence on demand. It is flexible, scalable, enables faster innovation, and has revolutionized the way people store and interact with data. However, because it is a centralized system, it can cause issues such as higher latency, limited bandwidth, security issues, and the requirement of high-speed internet connectivity. This is where edge computing comes in - to address these limitations. In essence, it's a decentralized cloud.
"Edge computing is the move to put compute power and logic as close to the end-user as possible. The edge cloud uses the emerging cloud computing serverless paradigm in which the cloud provider runs the server and dynamically manages the allocation of machine resources," Denenberg explains.
When it comes to making real-time decisions, edge computing can be very effective. He adds, "The average consumer expects speedy online experiences, so when milliseconds matter, the advantage of processing at the edge is that it is an ideal way to handle highly dynamic and time-sensitive data quickly.
"In contrast, running modern applications from a central cloud poses challenges related to latency, ability to pre-scale, and cost-efficiency."
What is the difference between fog computing and edge computing?
Fog computing and edge computing can appear very similar. They both involve pushing intelligence and processing capabilities closer to the origin of data. However, the difference lies in where the intelligence and compute power are placed.
Explaining the difference between the two, Denenberg said, "Fog computing, a term invented by Cisco, shares some similar design goals as edge computing, such as reducing latency to the end-user request and providing access to compute resources in a decentralized model. After that, things begin to differ."
He adds, "On the one hand, fog computing has a focus on use cases like IoT and sensors. This allows enterprises to extend their network from a central cloud closer to their devices and sensors, while maintaining a reliance on the central cloud.
"Edge computing, on the other hand, is also about moving compute closer to the end-user, but doing so in a way that removes the dependency on the central cloud as much as possible. By collocating compute and storage (cache) on Fastly's edge cloud, our customers are able to build very complex, global-scale applications and digital experiences without any dependency on centralized compute resources."
Will edge computing replace cloud computing?
A short answer to this question would be "not really."
"I don't think anything at this moment will fully replace the central cloud," Denenberg explains.
"People said data centers were dead as soon as AWS took off, and, while we certainly saw a dramatic shift in where workloads were being run over the last decade, plenty of organizations still operate very large data centers.
"There will continue to be certain workloads such as large-scale offline data processing, data warehouses, and the building of machine learning models that are much more suited to an environment that requires high compute density and long and complex processing times that operate on extremely massive data sets with no time sensitivity."
What is Fastly?
Fastly's story started back in 2008 when Artur Bergman, its founder, was working at Wikia. Three years later, he founded Fastly, headquartered in San Francisco, with its branches in four cities: London, Tokyo, New York, and Denver.
Denenberg shared that Fastly's edge cloud platform was built to address the limitations in content delivery networks (CDNs). "Fastly is an edge cloud platform built by developers, to empower developers. It came about as a result of our founder Artur Bergman's experience leading engineering at Wikia, where his passion for delivering fast, reliable, and secure online experiences for communities around the world was born. So he saw firsthand that CDNs -- which were supposed to address this problem -- weren't equipped to enable the global, real-time experiences needed in the modern era."
He further said, "To ensure a fast, reliable, and secure online experience, Fastly developed an edge cloud platform designed to provide unprecedented, real-time control, and visibility that removes traditional barriers to innovation. Knowing that developers are at the heart of building the online experience, Fastly was built to empower other developers to write and deploy code at the edge. We did this by making the platform extremely accessible, self-service, and API-first."
Fastly's new edge cloud resources
Coming to Fastly's new edge cloud resources, Denenberg shared the motivation behind this launch. He said, "We're here to serve the developer community and allow them to dream bigger at the edge, where we believe the future of the web will be built. This new collection of recipes and tutorials was born out of countless collaborations and problem-solving discussions with Fastly's global community of customers. Fastly's new collection of edge cloud resources makes it faster and safer for developers to discover, test, customize, and deploy edge cloud solutions."
Currently, Fastly has shared 66 code-based edge cloud solutions covering aspects like authentication, image optimization, logging, and more. It plans to add more solutions to the list in the near future.
Denenberg shared, "Our initial launch of 66 recipes and four solution patterns were created from some of the most common and valuable solutions we've seen when working with our global customer base. However, this is just the beginning - many more solutions are on our radar to launch on a regular cadence. This is what has us really excited -- as we expose more of these solutions to customers, the more inspiration they have to go even further in their work, which creates a remarkable flywheel of innovation on our edge cloud."
Challenges when developing on the edge
When asked about what edge cloud solutions Denenberg thinks developers often find difficult, he said, "I think difficulty is a tricky thing to address because engineering is a lot of times about tradeoffs.
"Those tradeoffs are most often realized when pursuing instant scalability, being able to run edge functions everywhere, and achieving low latency and microsecond boot time."
He adds, "NoSQL saw tremendous growth because it presented the ability to achieve scale with very reasonable trade-offs based on the types of applications people were building that traditional SQL databases made very difficult, from an architectural perspective, like scaling writes linearly to a cluster easily, for example. So for me, given the wide variety of applications our customers can build, I think it's about taking advantage of our platform in a way that improves the overall user experience, which sometimes just requires a shifting of the mindset in how those applications are architected."
We asked Denenberg whether other developers will be able to pitch in to expand this collection of resources. "We are already talking with customers who are excited to share what they have built on our platform that might allow others to achieve enhanced online experiences for their end users," he told us. "Fastly has an internal team dedicated to reviewing the solutions customers are interested in sharing to ensure they have the same consistency and coding style that mirrors how we would publish them internally. We welcome the sharing of innovation from our customer base that continues to inspire us through their work on the edge."
Other initiatives by Fastly to empower developers
Fastly is continuously contributing towards making the internet more trustworthy and safer by getting involved in projects like QUIC, Encrypted SNI, and WebAssembly. Last year, Fastly made three of its projects available on Fastly Labs: Terrarium, Fiddle, and Insights.
Read also: Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol
Denenberg shared that there are many ways Fastly is contributing to the open source community. "Yes, empowering developers is at the forefront of what we do. As developers are familiar with the open-source caching software that we use, it makes adopting our platform easier. We give away free Fastly services to open source and nonprofit projects. We also continue to work on open source projects, which empower developers to build applications in multiple languages and run them faster and more securely at our edge."
Fastly also constantly tries to improve its edge cloud platform to meet its customers' needs and empower them to innovate. "As an ongoing priority, we work to ensure that developers have the control and insight into our edge platform they need. To this end, our programmable edge provides developers with real-time visibility and control, where they can write and deploy code to push application logic to the edge. This supports modern application delivery processes and, just as importantly, frees developers to innovate without constraints," Denenberg adds.
He concludes, "Finally, we believe our values empower our community in several ways. At Fastly, we have chosen to grow with a focus on transparency, integrity, and inclusion. To do this, we are building a kind, ethical, and inclusive team that reflects our diverse customer base and the diversity of the developers that are creating online experiences. The more diverse our workforce, the easier it is to attract diverse talent and build technology that provides true value for our developer community across the world."
Follow Adam Denenberg on Twitter: @denen
Learn more about Fastly and its edge cloud platform at Fastly's official website.
More on cloud computing:
Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users
Google Cloud and Nvidia Tesla set new AI training records with MLPerf benchmark results
How do AWS developers manage Web apps?


Francesco Marchioni on Quarkus 1.0 and how Red Hat increases the efficiency of Cloud-Native applications [Interview]

Vincy Davis
19 Dec 2019
11 min read
Cloud-native applications are an assembly of independent services used to build new applications, optimize existing ones, and connect them in such a way that the applications can skillfully deliver the desired result. More specifically, they are employed to build scalable and fault-tolerant applications in public, private, or hybrid clouds.
Quarkus, a new Kubernetes-native framework launched in March this year, released its first stable version, Quarkus 1.0, last month. Quarkus allows Java developers to combine the power of containers, microservices, and cloud-native to build reliable applications. To get a clearer understanding of cloud-native applications with Java and Quarkus, we interviewed Francesco Marchioni, a Red Hat Certified JBoss Administrator (RHCJA) and Sun Certified Enterprise Architect (SCEA) working at Red Hat. Francesco is the author of the book 'Hands-On Cloud-Native Applications with Java and Quarkus'.
Francesco on Quarkus 1.0 and how Quarkus is bringing Java into the modern microservices and serverless modes of development
Quarkus is coming up with its first stable version, Quarkus 1.0, at the end of this month. It is expected to have features like a new reactive core based on Vert.x, a non-blocking security layer, and a new Quarkus ecosystem called 'universe'. What are you most excited about in Quarkus 1.0? What are your favorite features in Quarkus?
One of my favorite features of Quarkus is the reactive core ecosystem which supports both reactive and imperative programming models, letting Quarkus handle the execution model switch for you. This is one of the biggest gains you will enjoy when moving from a monolithic core, which is inherently based on synchronous executions, to a reactive environment that follows events and not just a loop of instructions. I also consider it of immense value that the foundation of the Quarkus API is a well-known set of APIs that I was already skilled with, therefore I could ramp up and write a book about it in less than one year!
How does the Quarkus Java framework compare with Spring? How do you think the Spring API compatibility in Quarkus 1.0 will help developers?
Both Quarkus and Spring Boot offer a powerful stack of technologies and tools to build Java applications. In general terms, Quarkus inherits its core features from Java EE, with CDI and JAX-RS being the most evident examples. On the other hand, Spring Boot follows an alternative modular architecture based on the Spring core. In terms of microservices, they also differ, as Quarkus leverages the MicroProfile API while Spring Boot relies on Spring Boot Actuator and Netflix Hystrix. Besides the different stacks, Quarkus has some unique features available out of the box such as build-time class initialization, Kubernetes resources generation, and GraalVM native image support. Although there are no official benchmarks, in the typical case of a REST service built with Quarkus, you can observe an RSS memory reduction to half and a 5x increase in boot speed. In terms of compatibility, it's worth mentioning that, while users are encouraged to use CDI annotations for their applications, Quarkus provides a compatibility layer for Spring dependency injection (e.g. @Autowired) in the form of the spring-di extension.
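To make the CDI and JAX-RS point above concrete, here is a minimal sketch of a Quarkus 1.0-style REST resource. It is not taken from Francesco's book; the class names and greeting logic are invented for illustration, and it assumes a project created with the quarkus-resteasy extension (the standard "javax" namespaces mentioned in the interview).

```java
package org.acme.getting.started;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A CDI bean holding the "business logic"; Quarkus wires it up at build time.
@ApplicationScoped
class GreetingService {
    String greet(String name) {
        return "Hello, " + name + "!";
    }
}

// A standard JAX-RS resource -- the same javax.* annotations Java EE developers already know.
@Path("/hello")
public class GreetingResource {

    // Package-private field injection is the idiomatic Quarkus style.
    @Inject
    GreetingService service;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return service.greet("Quarkus");
    }
}
```

In Quarkus's development mode this is served at /hello with live reload, and the same code can then be compiled to a GraalVM native image, which is where the memory and boot-time gains Francesco mentions come from.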
Quarkus is tailored for GraalVM and crafted from best-of-breed Java libraries and standards. How do you think Quarkus brings Java into the modern microservices and serverless modes of development? Also, why do you think Java continues to be a top programming language for back-end enterprise developers?
Native code execution in combination with GraalVM is an amazing opportunity for Java. That said, I wouldn't say Quarkus is just native-centric, as it immediately buys Java developers an RSS memory reduction to about half, an increase in boot speed, top garbage collector performance, plus a set of libraries that are tailored for the JDK. This makes Java a first-class citizen in the microservices ecosystem and I bet it will continue to be one of the top programming languages still for many years.
On how his book will benefit Java developers and architects
In your book 'Hands-On Cloud-Native Applications with Java and Quarkus' you have demonstrated advanced application development techniques such as reactive programming, message streaming, and advanced configuration hacks. Apart from these, what other techniques can be used for managing advanced application development in Quarkus? Also, apart from the use cases in your book, in what other areas/domains can Quarkus be used?
In terms of configuration, a whole chapter of the book explores the advanced configuration options which are derived from the MicroProfile Config API and the applications' profile management, which is a convenient way to shift the configuration options from one environment to another - think, for example, how easy it can be with Quarkus to switch from a production DB to a development or test database. Besides the use cases discussed in the book, I'd say Quarkus is rather polyvalent, based on the number of extensions that are already available. For example, you can easily extend the example provided in the last chapter, which is about streaming data, with advanced transformation patterns and routes provided by the Camel extension, thus leveraging the most common integration scenarios.
What does your book aim to share with readers? Who will benefit the most from your book? How will your book help Java developers and architects in understanding the microservice architecture?
This book is a log of my journey through the Quarkus land, which started exactly one year ago, at its very first internal preview by our engineers. Therefore my first aim is to ignite the same passion in the readers, whatever their "maturity level" in IT. I believe developers and architects from the Java Enterprise trenches will enjoy the fastest path to learning Quarkus, as many extensions are pretty much the same ones they have been using for years. Nevertheless, I believe any young developer with a passion for learning can quickly get on board and become proficient with Quarkus by the end of this book. One advantage of younger developers over seasoned ones, like me, is that it will be easier for them to start thinking in terms of services instead of building up monolithic giant applications like we used to do for years. Although microservices patterns are not the main focus of this book, a lot of work has been done to demonstrate how to connect services and not just how to build them.
On how Red Hat uses Quarkus in its products and services
Red Hat is already using Quarkus in its products and services. How is it helping Red Hat in increasing the efficiency of its cloud-native applications?
To be precise, Quarkus is not yet a Red Hat-supported product, but it has already reached an important milestone with the release of Quarkus 1.0 final, so it will definitely be included in the list of our supported products, according to our internal productization roadmap. That being said, Red Hat is working on increasing the efficiency of cloud-native applications in several ways through a combination of practices, technologies, and processes that can be summarized in the following steps, which will eventually lead to cloud-native application success:
- Evolve a DevOps culture and practices to embrace new technology through tighter collaboration.
- Speed up existing, monolithic applications with simple migration processes that will eventually lead to microservices or mini services.
- Use ready-to-use developer tools, such as application services, to speed up the development of business logic. The OpenShift tools (web and CLI) are an example of this.
- Choose the right tool for the right application by using a container-based application platform that supports a large mix of frameworks, languages, and architectures.
- Provide self-service, on-demand infrastructure for developers using containers and container orchestration technology to simplify access to the underlying infrastructure, give control and visibility to IT operations, and provide application lifecycle management across environments.
- Automate IT to accelerate application delivery using clear service requirements definition, self-service catalogs that empower users (such as the Container Catalog), and metering and monitoring of runtime processes.
- Implement continuous delivery and advanced deployment techniques to accelerate the delivery of your cloud-native applications.
- Evolve your applications into a modular architecture by choosing a design that fits your specific needs, such as microservices, a monolith-first approach, or mini services.
On Quarkus' cloud-native security and its competitors
Cloud-native applications provide customers with a better time-to-market strategy and also allow them to build more robust, resilient, scalable, and cost-effective applications. However, they also come with a big risk of potential security breaches. What is your take on cloud-native security for cloud-native applications? Also, what are your thoughts on future-proofing cloud applications?
Traditionally, IT security was focused on hardening and the datacenter perimeter, but today, with cloud applications, that perimeter is fading out. Public and hybrid clouds are shifting responsibility for security and regulatory compliance across the vendors. The adoption of containers at scale requires the adoption of new methods of analyzing, securing, and updating the delivery of applications. As a result, static security policies don't scale well for containers in the enterprise, and security needs to move to a new concept called "continuous container security". This includes some key aspects such as securing the container pipeline and the application, securing the container deployment environment(s) and infrastructure, integrating with enterprise security tools, and meeting or enhancing existing security policies.
About future-proofing of cloud applications, I believe proper planning and diligence can ensure that a company's cloud investments withstand future change or become future-proof. It needs to be understood that new-generation applications (such as apps for social, gaming, and generally mobile apps) have different requirements and generate different workloads.
This new generation of applications requires a substantial amount of dynamic scaling and elasticity that would be quite expensive or impossible to achieve with traditional architectures based on old data centers and bare-metal machines.
Micronaut and Helidon, the other two frameworks that support GraalVM native images and target cloud-native microservices, are often compared to Quarkus. In what aspects are they similar? And in what ways is Quarkus better than and/or different from the other two?
Although it is challenging to compare a set of cutting-edge frameworks, as some factors might vary in a middle/long-term perspective, in general terms I'd say that Quarkus provides the highest level of flexibility, especially if you want to combine the reactive programming model with the imperative programming model. Also, Quarkus builds on top of well-known APIs such as CDI, JAX-RS, and the MicroProfile API, and uses the standard "javax" namespaces to access them. Hence, the transition from a former enterprise application is quite smooth compared with competitive products. Micronaut too has some interesting features, such as support for multiple programming languages (Java, Kotlin, and Groovy, the latter being exclusive to Micronaut) and a powerful command-line interface (CLI) to generate projects. (A CLI is not yet available in Quarkus, although there are plans to include it in upcoming versions.) On the other hand, Helidon is the less polyglot alternative (it supports only Java right now), yet it features a clean and simple approach to containers by providing a self-contained Dockerfile that can be built by simply calling docker build, not requiring anything locally (except the Docker tool, of course). Also, the fact that Helidon plays well with GraalVM should be acknowledged, as they are both official Oracle products. So, although for new projects the decision is often a matter of personal preferences and individual skills in your team, I'd say that Quarkus leverages existing Java Enterprise experience for faster results.
If you want to become an expert in building cloud-native applications with Java and Quarkus, learn the end-to-end development guide presented in the book 'Hands-On Cloud-Native Applications with Java and Quarkus'. This book will also help you in understanding a wider range of distributed application architectures using a full-stack framework and give you a heads-up on the new features in Quarkus 1.0.
About the author
Francesco Marchioni is a Red Hat Certified JBoss Administrator (RHCJA) and Sun Certified Enterprise Architect (SCEA) working at Red Hat in Rome, Italy. He started learning Java in 1997, and since then he has followed all the newest application program interfaces released by Sun. In 2000, he joined the JBoss community, when the application server was running the 2.X release. He has spent years as a software consultant, where he has enabled many successful software migrations from vendor platforms to open source products, such as JBoss AS, fulfilling the tight budget requirements necessitated by the current economy. Francesco also manages a blog on WildFly Application Server, OpenShift, JBoss projects and enterprise applications, focused on Java and JBoss technologies. You can reach him on Twitter and LinkedIn.
RedHat's Quarkus announces plans for Quarkus 1.0, releases its rc1
How Quarkus brings Java into the modern world of enterprise tech
Introducing 'Quarkus', a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot
OpenJDK Project Valhalla's head shares how they plan to enhance the Java language and JVM with value types, and more
Snyk's JavaScript frameworks security report 2019 shares the state of security for React, Angular, and other frontend projects

Unlocking the secrets of Microsoft Power BI

Amey Varangaonkar
10 Oct 2017
12 min read
Self-service business intelligence is the buzzword everyone's talking about today. It gives modern business users the ability to find unique insights from their data without any hassle. Amidst a myriad of BI tools and platforms out there in the market, Microsoft's Power BI has emerged as a powerful, all-encompassing BI solution - empowering users to tailor and manage business intelligence to suit their unique needs and scenarios.
Brett Powell is a Microsoft Power BI partner, and the founder and owner of Frontline Analytics LLC, a BI and analytics research and consulting firm. Brett has contributed to the design and development of Microsoft BI stack and Power BI solutions of diverse scale and complexity across the retail, manufacturing, financial, and services industries. He regularly blogs about the latest happenings in Microsoft BI and Power BI features at Insight Quest. He is also an organizer of the Boston BI User Group.
In this two-part interview Brett talks about his new book, Microsoft Power BI Cookbook, and shares his insights and expertise in the area of BI and data analytics with a particular focus on Power BI. In part one of the interview, Brett shared his views on topics ranging from what it takes to be successful in the field of BI and data analytics to why he thinks Microsoft is going to lead the way in shaping the future of the BI landscape. Today in part two, he shares his expertise with us on the unique features that differentiate Power BI from other tools and platforms in the BI space.
Key Takeaways
- Ease of deployment across multiple platforms, efficient data-driven insights, ease of use, and support for a data-driven corporate culture are what define an ideal business intelligence solution for enterprises.
- Power BI leads in self-service BI because it's the first Software as a Service (SaaS) platform to offer 'End User BI' in which anyone, not just a business analyst, can leverage powerful tools to obtain greater value from data.
- Microsoft Power BI has been identified as a leader in Gartner's Magic Quadrant for BI and Analytics platforms, and provides a visually rich and easy-to-access interface that modern business users require.
- You can isolate report authoring from dataset development in Power BI, or quickly scale a Power BI dataset up or down as per your needs.
- Power BI is much more than just a tool for reports and dashboards. With a thorough understanding of the query and analytical engines of Power BI, users can customize more powerful and sustainable BI solutions.
Part Two: Interview Excerpts - Power BI from a Worm's Eye View
How long have you been a Microsoft Power BI user? How have you been using Power BI on a day-to-day basis? What other tools do you generally end up using alongside Power BI for your work?
I've been using Power BI from the beginning, when it was merely an add-in for Excel 2010. Back then, there was no cloud service and Microsoft BI was significantly tethered to SharePoint, but the fundamentals of the Tabular data modelling engine and the DAX programming language were available in Excel to build personal and team solutions. On a day-to-day basis I regularly work with Power BI datasets - that is, the analytical data models inside of Power BI Desktop files. I also work with Power BI report authoring and visualization features and with various data sources for Power BI, such as SQL Server.
From Learning to Mastering Power BI
For someone just starting out using Power BI, what would your recommended learning plan be?
For existing users, what does the road to mastering Microsoft Power BI look like?
When you're just starting out I'd recommend learning the essentials of the Power BI architecture and how the components (Power BI service, Power BI Desktop, On-Premises Data Gateway, Power BI Mobile, etc.) work together. A sound knowledge of the differences between datasets, reports, and dashboards is essential, and an understanding of app workspaces and apps is strongly recommended as this is the future of Power BI content management and distribution. In terms of a learning path you should consider what your role will be on Power BI projects - will you be administering Power BI, creating reports and dashboards, or building and managing datasets? Each of these roles has its own skills, technologies and processes to learn. For example, if you're going to be designing datasets, a solid understanding of the DAX language and filter context is essential, and knowledge of M queries and data access is very important as well.
The road to mastering Power BI, in my view, involves a deep understanding of both the M and DAX languages in addition to knowledge of Power BI's content management, delivery, and administration processes and features. You need to be able to contribute to the full lifecycle of Power BI projects and help guide the adoption of Power BI across an organization.
The most difficult or 'tricky' aspect of Power BI is thinking of M and DAX functions and patterns in the context of DirectQuery and Import mode datasets. For example, certain code or design patterns which are perfectly appropriate for import models are not suitable for DirectQuery models. A deep understanding of the tradeoffs and use cases for DirectQuery versus default Import (in-memory) mode and the ability to design datasets accordingly is a top characteristic of a Power BI master.
5+ interesting things (you probably didn't know) about Power BI
What are some things that users may not have known about Power BI or what it could do? Can readers look forward to learning to do some of them from your upcoming book, Microsoft Power BI Cookbook?
The great majority of learning tutorials and documentation on Power BI involves the graphical interfaces that help you get started with Power BI. Likewise, when most people think of Power BI they almost exclusively think of data visualizations in reports and dashboards - they don't think of the data layer. While these features are great and professional Power BI developers can take advantage of them, the more powerful and sustainable Power BI solutions require some level of customization and can only be delivered via knowledge of the query and analytical engines of Power BI. Readers of the Power BI Cookbook can look forward to a broad mix of relatively simple to implement tips on usability, such as providing an intuitive Fields list for users, to more complex yet powerful examples of data transformations, embedded analytics, and dynamic filter behaviours such as with row-level security models. Each chapter contains granular details on core Power BI features but also highlights synergies available by integrating features within a solution, such as taking advantage of an M query expression, a SQL statement, or a DAX metric in the context of a report or dashboard.
What are the 3 most striking features that make you love to work with Power BI? What are 3 aspects you would like improved?
The most striking feature for me is the ability to isolate report authoring from dataset development.
With Power BI you can easily implement a change to a dataset, such as a new metric, and many report authors can then leverage that change in their visualizations and dashboards, as their reports are connected to the published version of the dataset in the Power BI service. A second striking feature is the 'Query Folding' of the M query engine. I can write or enhance an M query such that a SQL statement is generated to take advantage of the data source system's query processing resources. A third striking feature is the ability to quickly scale up or down a Power BI dataset via the dedicated hardware available with Power BI Premium. With Power BI Premium, free users (users without a Pro License) are now able to access Power BI reports and dashboards.
The three aspects I'd like to see improved include the following:
- Currently we don't have IntelliSense and other common development features when writing M queries.
- Currently we don't have display folders for Power BI datasets, thus we have to work around this with larger, more complex datasets to maintain a simple user interface.
- Currently we don't have Perspectives, a feature of SSAS, that would allow us to define a view of a Power BI dataset such that users don't see other parts of a data model not relevant to their needs.
Is the latest Microsoft Power BI update a significant improvement over the previous version? Any specific new features you'd like to highlight?
Absolutely. The September update included a Drillthrough feature that, if configured correctly, enables users to quickly access the crucial details associated with values on their reports, such as an individual vendor or a product. Additionally, there was a significant update to Report Themes which provides organizations with more control to define standard, consistent report formatting. Drillthrough is so important that an example of this feature was added to the Power BI Cookbook. Additionally, Power BI usage reporting, including the identity of the individual user accessing Power BI content, was recently released and this too was included in the Power BI Cookbook. Finally, I believe the new Ribbon Chart will be used extensively as a superior alternative to stacked column charts.
Can you tell us a little about the new 'time storyteller custom visual' feature in Power BI?
The Timeline Storyteller custom visual was developed by the Storytelling with Data group within Microsoft Research. Though it's available for inclusion in Power BI reports via the Office Store like other custom visuals, it's more like a storytelling design environment than a single visual, given its extensive configuration options for timeline representations, scales, layouts, filtering and annotations. Like the inherent advantages of geospatial visuals, the linking of Visio diagrams with related Power BI datasets can intuitively call out bottlenecks and otherwise difficult-to-detect relationships within processes.
7 reasons to choose Power BI for building enterprise BI solutions
Where does Power BI fall within Microsoft's mission to empower every person and every organization on the planet to achieve more of 1. Bringing people together 2. Living smarter 3. Friction free creativity 4. Fluid mobility?
Power BI Desktop is available for free and is enhanced each month with features that empower the user to do more and which remove technical obstacles.
Similarly, with no knowledge whatsoever of the underlying technology or solution, a business user can access a Power BI app on their phone or PC and easily view and interact with data relevant to their role. Importantly for business analysts and information workers, Power BI acknowledges the scarcity of BI and analytics resources (i.e. data scientists, BI developers) and thus provides both graphical interfaces as well as full programming capabilities right into Power BI Desktop. This makes it feasible and often painless to quickly create a working, valuable solution with relatively little experience with the product.
We can expect Power BI to support 10GB (and then larger) datasets soon as well as improve its 'data storytelling' capabilities with a feature called Bookmarks. In effect, Bookmarks will allow Power BI reports to become like PowerPoint presentations with animation. Organizations will also have greater control over how they utilize the v-Cores they purchase as part of Power BI Premium. This will make scaling Power BI deployments easier and more flexible. I'm personally most interested in the incremental refresh feature identified on the Power BI Premium Roadmap. Currently an entire Power BI dataset (in import mode) is refreshed, and this is a primary barrier to deploying larger Power BI datasets. Additionally (though not exclusively by any means), the ability to 'write' from Power BI to source applications is also a highly anticipated feature on the Power BI Roadmap.
How does your book, Microsoft Power BI Cookbook, prepare its readers to be industry ready? What are the key takeaways for readers from this book?
Power BI is built with proven, industry-leading BI technologies and architectures such as in-memory, columnar compressed data stores and functional query and analytical programming languages. Readers of the Power BI Cookbook will likely be able to quickly deliver fresh solutions or propose ideas for enhancements to existing Power BI projects. Additionally, particularly for BI developers, the skills and techniques demonstrated in the Power BI Cookbook will generally be applicable across the Microsoft BI stack, such as in SQL Server Analysis Services Tabular projects and the Power BI Report Server.
A primary takeaway from this book is that Power BI is much more than a report authoring or visualization tool. The data transformation and modelling capabilities of Power BI, particularly combined with Power BI Premium capacity and licensing considerations, are robust and scalable. Readers will quickly learn that though certain Power BI features are available in Excel, and though Excel can be an important part of Power BI solutions from a BI consumption standpoint, there are massive advantages of Power BI relative to Excel. Therefore, almost all PowerPivot and Power Query for Excel content can and should be migrated to Power BI Desktop. An additional takeaway is the breadth of project types and scenarios that Power BI can support. You can design a corporate BI solution with a Power BI dataset to support hundreds of users across multiple teams, but you can also build a tightly focused solution such as monitoring system resources or documenting the contents of a dataset.
If you enjoyed this interview, check out Brett's latest book, Microsoft Power BI Cookbook. Also, read part one of the interview here to see how and where Power BI fits into the BI landscape and what it takes to stay successful in this industry.


Sam Erskine talks Microsoft System Center

Samuel Erskine
29 Aug 2014
1 min read
How will System Center be used in the next 2 years? Samuel Erskine (MCT), an experienced System Center admin and Packt author, talks about the future of Microsoft System Center. Samuel shares his insights on the challenges of achieving automation with the cloud, and effective reporting to determine business ROI.


Listen: Walmart Labs Director of Engineering Vilas Veeraraghavan talks to us about building for resiliency at one of the biggest retailers on the planet [Podcast]

Richard Gall
04 Jun 2019
2 min read
As software systems become more distributed, reliability and resiliency have become more and more important. This is one of the reasons why we've seen the emergence of chaos engineering: unreliability causes failures which, in turn, cause downtime. And downtime costs money. The impact of downtime is particularly significant for huge organizations that depend on the resilience and reliability of their platforms and applications. Take Uber - not only does the simplicity of the user experience hide its astonishing complexity, but it also has to ensure it can manage that complexity in a way that's reliable. A ride-hailing app couldn't be anywhere near as successful as Uber if it didn't work reliably - even 1% downtime would matter.
Building resilient software is difficult
But actually building resilient systems is difficult. We recently saw how Uber uses distributed tracing to build more observable systems, which can help improve reliability and resiliency, in the last podcast episode with Yuri Shkuro. In this week's podcast we're diving even deeper into resiliency with Vilas Veeraraghavan, who's Director of Engineering at Walmart Labs. Vilas has experience at Netflix, the company where chaos engineering originated, but at Walmart he's been playing a central role in bringing a more evolved version of chaos engineering - which Vilas calls resiliency engineering - to the organization.
In this episode we discuss:
- Whether chaos engineering and resiliency engineering are for everyone
- Cultural challenges
- How to get buy-in
- Getting tooling right
https://soundcloud.com/packt-podcasts/walmart-labs-director-of-engineering-vilas-veeraraghavan-on-chaos-engineering-resiliency
"You do not want to get up in the middle of the night, get on the call with the VP of engineering and blurt out saying I have no idea what happened. Your answer should be I know exactly what happened because we have tested this exact scenario multiple times. We developed a recipe for it, and here is what we can do... that gives you, as an engineer, the power to be able to stand up and say I know exactly what's going on, I'll fix it, don't worry, we're not going to cause an outage."

Unlocking Insights: How Power BI Empowers Analytics for All Users

Gogula Aryalingam
29 Nov 2024
5 min read
Introduction
In today's data-driven world, businesses rely heavily on robust tools to transform raw data into actionable insights. Among these tools, Microsoft Power BI stands out as a leader, renowned for its versatility and user-friendliness. From its humble beginnings as an Excel add-in, Power BI has evolved into a comprehensive enterprise business intelligence platform, competing with industry giants like Tableau and Qlik. This journey of transformation reflects not only Microsoft's innovation but also the growing need for accessible, scalable analytics solutions.
As a data specialist who has transitioned from traditional data warehousing to modern analytics platforms, I've witnessed firsthand how Power BI empowers both technical and non-technical users. It has become an indispensable tool, offering capabilities that bridge the gap between data modeling and visualization, catering to everyone from citizen developers to seasoned data analysts. This article explores the evolution of Power BI, its role in democratizing data analytics, and its integration into broader solutions like Microsoft Fabric, highlighting why mastering Power BI is critical for anyone pursuing a career in analytics.
The Changing Tide for Data Analysts
When you think of business intelligence in the modern era, Power BI is often the first tool that comes to mind. However, this wasn't always the case. Originally launched as an add-in for Microsoft Excel, Power BI quickly evolved into a comprehensive enterprise business intelligence platform in a few years, competing with the likes of Qlik and Tableau - a true testament to its capabilities. As a data specialist, what really impresses me about Power BI's evolution is how Microsoft has continuously improved its user-friendliness, making both data modeling and visualizing more accessible, catering to both technical professionals and business users.
As a data specialist, initially working with traditional data warehousing, and now with modern data lakehouse-based analytics platforms, I've come to appreciate the capabilities that Power BI brings to the table. It empowers me to go beyond the basics, allowing me to develop detailed semantic layers and create impactful visualizations that turn raw data into actionable insights. This capability is crucial in delivering truly comprehensive, end-to-end analytics solutions. For technical folk like me, by building on our experiences working with these architectures and a deep understanding of the technologies and concepts that drive them, integrating Power BI into the workflow is a smooth and intuitive process. The transition to including Power BI in my solutions feels almost like a natural progression, as it seamlessly complements and enhances the existing frameworks I work with. It's become an indispensable tool in my data toolkit, helping me to push the boundaries of what's possible in analytics.
In recent years, there has been a noticeable increase in the number of citizen developers and citizen data scientists. These are non-technical professionals who are well-versed in their business domains and dabble with technology to create their own solutions. This trend has driven the development of a range of low-code/no-code, visual tools such as Coda, Appian, OutSystems, Shopify, and Microsoft's Power Platform. At the same time, the role of the data analyst has significantly expanded. More organizations are now entrusting data analysts with responsibilities that were traditionally handled by technology or IT departments.
These include tasks like reporting, generating insights, data governance, and even managing the organization's entire analytics function. This shift reflects the growing importance of data analytics in driving business decisions and operations. As a data specialist, I've been particularly impressed by how Power BI has evolved in terms of user-friendliness, catering not just to tech-savvy professionals but also to business users. Microsoft has continuously refined Power BI, simplifying complex tasks and making it easy for users of all skill levels to connect, model, and visualize data. This focus on usability is what makes Power BI such a powerful tool, accessible to a wide range of users.
For non-technical users, Power BI offers a short learning curve, enabling them to connect to and model data for reporting without needing to rely on Excel, which they might be more familiar with. Once the data is modeled, they can explore a variety of visualization options to derive insights. Moreover, Power BI's capabilities extend beyond simple reporting, allowing users to scale their work into a full-fledged enterprise business intelligence system. Many data analysts are now looking to deepen their understanding of the broader solutions and technologies that support their work. This is where Microsoft Fabric becomes essential. Fabric extends Power BI by transforming it into a comprehensive, end-to-end analytics platform, incorporating data lakes, data warehouses, data marts, data engineering, data science, and more. With these advanced capabilities, technical work becomes significantly easier, enabling data analysts to take their skills to the next level and realize their full potential in driving analytics solutions.
If you're considering a career in analytics and business intelligence, it's crucial to master the fundamentals and gain a comprehensive understanding of the necessary skills. With the field rapidly evolving, staying ahead means equipping yourself with the right knowledge to confidently join this dynamic industry. The Complete Power BI Interview Guide is designed to guide you through this process, providing the essential insights and tools you need to jump on board and thrive in your analytics journey.
Conclusion
Microsoft Power BI has redefined the analytics landscape by making advanced business intelligence capabilities accessible to a wide audience, from technical professionals to business users. Its seamless integration into modern analytics workflows and its ability to support end-to-end solutions make it an invaluable tool in today's data-centric environment. With the rise of citizen developers and expanded responsibilities for data analysts, tools like Power BI and platforms like Microsoft Fabric are paving the way for more innovative and comprehensive analytics solutions.
For aspiring professionals, understanding the fundamentals of Power BI and its ecosystem is key to thriving in the analytics field. If you're looking to master Power BI and gain the confidence to excel in interviews and real-world scenarios, The Complete Power BI Interview Guide is an invaluable resource. From the core Power BI concepts to interview preparation and onboarding tips and tricks, The Complete Power BI Interview Guide is the ultimate resource for beginners and aspiring Power BI job seekers who want to stand out from the competition.
Author Bio
Gogula is an analytics and BI architect born and raised in Sri Lanka.
His childhood was spent dreaming, while most of his adulthood was and is spent working with technology. He currently works for a technology and services company based out of Colombo. He has accumulated close to 20 years of experience working with a diverse range of customers across various domains, including insurance, healthcare, logistics, manufacturing, fashion, F&B, K-12, and tertiary education. Throughout his career, he has undertaken multiple roles, including managing delivery, architecting, designing, and developing data & AI solutions. Gogula is a recipient of the Microsoft MVP award more than 15 times, has contributed to the development and standardization of Microsoft certifications, and holds over 15 data & AI certifications. In his leisure time, he enjoys experimenting with and writing about technology, as well as organizing and speaking at technology meetups. 


An Interview with Christoph Körner

Christoph Körner
14 Jul 2015
3 min read
Christoph is the CTO of GESIM, a Swiss start-up company, where he is responsible for their simulation software and web interface built with Angular and D3. He is a passionate, self-taught software developer and web enthusiast with more than 7 years' experience in designing and implementing customer-oriented web-based IT solutions. Curious about new technologies and interested in innovation, Christoph immediately started using AngularJS and D3 with their first versions. We caught up with him to get his insights into writing with Packt.
Why did you decide to write with Packt? What convinced you?
Initially, I wasn't sure about taking on such a big project. However, after doing some research and discussing Packt's reputation with my University colleagues, I was sure I wanted to go ahead. I was also really passionate about the topic; Angular is one of my favourite tools for frontend JavaScript.
As a first-time Packt author, what type of support did you receive to develop your content effectively?
I started off working independently, researching papers, developing code for the project and reading other books on similar topics, and I got some great initial feedback from my University colleagues. As the project progressed with Packt, I received a lot of valuable feedback from the technical reviewers, and the process really provided a lot of valuable and constructive insights.
What were your main aims when you began writing with us, and how did Packt in particular match those aims?
I was aiming to help other people get started with an awesome front-end technology stack (Angular and D3). I love to look closely at topics that interest me, and enjoy exploring all the angles, both practical and theoretical, and helping others understand them. My book experience was great and Packt allowed me to explore all the theory and practical concepts that the target reader will find really interesting.
What was the most rewarding part of the writing experience?
The most rewarding part of writing is getting constructive, critical feedback - particularly readers who leave comments about the book, as well as the comments from my reviewers. It was a pleasure to have such skilled, motivated and experienced reviewers on board who helped me develop the concepts of the book. And of course, holding your own book in your hands after 6 months of hard work is a fantastic feeling.
What do you see as the next big thing in your field, and what developments are you excited about?
The next big thing will be Angular 2.0 and TypeScript 1.5, and this will have a big impact on the JavaScript world. Combining - for example - new TypeScript features such as annotations with D3.js opens up a whole new world of writing visualizations, using annotations for transitions or styling, which will make the code much cleaner.
Do you have any advice for new authors?
Proper planning is the key; it will take time to write, draw graphics and develop your code at the same time. Don't cut a chapter because you think you don't have time to write it as you wanted - find the time! And get feedback as soon as possible. Experienced authors and users can give very good tips, advice and critique.
You can connect with Christoph here:
GitHub: https://github.com/chaosmail
Twitter: https://twitter.com/ChrisiKrnr
LinkedIn: https://ch.linkedin.com/in/christophkoerner
Blog: http://chaosmail.github.io/
Click here to find out more about Christoph's book Data Visualization with D3 and AngularJS.