
How-To Tutorials - Web Development


Supabase Unleashed: Advanced Features for TypeScript, Frameworks, and Direct Database Connections

David Lorenz
26 Nov 2024
15 min read
This article is an excerpt from the book, Building Production-Grade Web Applications with Supabase, by David Lorenz. Supabase supercharges web development with scalable backend solutions. With this book, you'll build secure, real-time apps of any size by leveraging Supabase's powerful Row Level Security and eliminating the need for separate backend development.

Introduction

Supabase is a powerful platform that integrates Postgres databases with modern developer tools to simplify backend development. While the Supabase client is the recommended approach for interacting with its database, understanding how to establish a direct database connection expands your options and offers greater flexibility. This section explores the scenarios in which direct access might be necessary, how to configure such connections, and their implications in different project setups. By mastering this complementary skill, you'll unlock additional possibilities for extending and optimizing your applications.

Connecting directly to the database

Note: Building a raw database connection is helpful but complementary knowledge. In this book's project, we will use the Supabase client and not a direct database connection.

At the end of the day, Supabase comes down to being a Postgres database with additional services surrounding it like a galaxy. Hence, you can also access the database directly. But why would you ever want to do this?

When you work with platforms such as Supabase that make your life easier by providing data storage, file storage, authentication, and more, you often don't get direct access to the underlying database, or your access is extremely limited. The reason is that providers of such platforms often want to safeguard you and themselves from modifying the project in a way that breaks it irrevocably.

Having no or limited direct access to your database also means that you cannot extend it with additional features or use libraries of any kind that need direct access (such as sequelize, drizzle, or pg_dump). But with Supabase, you can. So, let's have a look at how we can connect directly.

On a supabase.com project, within the Dashboard (Studio) area, you'll find the database connection URI of the Postgres database in the Project Settings | Database section. In your local instance, the complete connection URI is shown in the Terminal after running npx supabase start or, for a running instance, when calling npx supabase status. It already contains the username and password, separated by a colon (on your local instance, this is usually postgresql://postgres:postgres@localhost:54322/postgres).

Then, you can connect to it with whichever tool you like – for example, via GUIs for databases such as DBeaver (https://dbeaver.io/).

To test whether the connection to the database works, I prefer the psql command-line tool.
For my local instance, I can simply use one of the following commands.

The most minimal way to test a connection is by calling psql with the connection string in postgresql://username:password@host:port/postgres format, like so:

psql 'postgresql://postgres:postgres@localhost:54322/postgres'

For real connections (not just local ones), you should prefer a more verbose form that doesn't keep the password in cleartext in the Terminal and prompts you for the password instead:

psql -h localhost -p 54322 -d postgres -U postgres

This is equivalent to the longest form, where the meaning of each parameter becomes self-explanatory:

psql --host=localhost --port=54322 --dbname=postgres --username=postgres

With that, you know how to connect to the database if needed. Please be aware that connecting to the database directly and changing data there can be dangerous if you don't know what you're doing, as there's no protection layer in between.

Next, you'll learn what you need to do to get immediate TypeScript support with Supabase.

Using Supabase with TypeScript

Many projects nowadays use TypeScript instead of JavaScript. In this book, we'll focus on using Supabase with JavaScript rather than TypeScript. Still, I want to show you how easily TypeScript can be used in combination with the Supabase JavaScript clients, and which benefits it brings.

Supabase's npm library comes with TypeScript support out of the box. With TypeScript, Supabase can also tell you that the expected data from your database doesn't exist, or help you find the correct table name for your database via autocompletion in your editor.

All you need for this is a specific TypeScript file that is generated specifically for your Supabase project. The following steps show how to trigger the Supabase CLI so that it creates such a supabase.ts file containing the needed types for TypeScript – depending on whether you want the types from a supabase.com project, a local instance, or an instance hosted somewhere other than supabase.com.

If you want types for a project based on supabase.com, follow these steps to get a supabase.ts file:

I. Go to https://supabase.com/dashboard/account/tokens and create an access token.

II. Run npx supabase login. You'll be asked for the access token you just generated. After pressing Enter, it will tell you that the login process has succeeded.

III. Now, open your project via supabase.com; you'll see a link in your browser that looks like https://supabase.com/dashboard/project/YOUR_PROJECT_ID/.... You'll also find the same project ID as part of your API URL in the Settings | API section. Copy this project ID.

IV. Generate your custom supabase.ts file by running:

npx supabase gen types typescript --schema public --project-id YOUR_PROJECT_ID > supabase.ts

If you're running a local instance, which you should have by now, and want to grab the types from there, you don't need an access key. You only need to run the following command in your project folder (this is where we ran npx supabase init previously in this chapter):

npx supabase gen types typescript --schema public --local > supabase.ts

Note that if you run it outside of the project folder, it won't know which local instance you're referring to and will fail.

If you have an instance that's self-hosted on a remote server or running with a provider other than supabase.com, the previous steps won't work, and you'll need the generalized variant of fetching types with a direct database connection. To do that, you must generate the supabase.ts file as follows:
I. Find your database URL (see the Connecting directly to the database section). For example, in your local instance, you'll find it in the Terminal output after starting Supabase with npx supabase start. It will be in the following format: postgresql://USER:PASSWORD@DB_HOST:PORT/postgres.

II. Run the following command; you'll receive the file:

npx supabase gen types typescript --schema public --db-url postgresql://USER:PASSWORD@DB_HOST:PORT/postgres > supabase.ts

With this supabase.ts file, it's easy to make your client type-safe and get proper type hints – simply import the Database type from supabase.ts and pass it to the client creation process. For example, if you want to make the createReqResSupabase({req,res}) function type-safe, you just pass the <Database> type when creating the client:

import type { Database } from './supabase';

export const getSupabaseReqResClient = ({ req, res }) => {
  return createServerClient<Database>(...);
};

With that, your Supabase client is type-safe. But let's understand what that means and what it implies. Say, for example, you're fetching data from a specific table of your database: the Supabase client will know exactly which columns to fetch and provide proper type support for the returned data.

But what happens when I change anything in my instance? Won't my supabase.ts file immediately contain outdated types?

Let me try to answer this question with another question: How can you use a new feature on your smartphone if the new feature is only available in a newer software version? The simple answer is that you update the software version.

The same goes for the Supabase types. Anytime you change something in your Supabase project and it doesn't give you the proper TypeScript hints, run npx supabase gen types typescript ... again and you'll be all set.

With this, you can use Supabase in a TypeScript-based project.
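To make the benefit concrete, here is a minimal sketch of what a typed query looks like in practice. The articles table and its columns are purely illustrative assumptions – the point is that both the table name and the returned rows are checked against the generated Database type:

import { createClient } from '@supabase/supabase-js';
import type { Database } from './supabase';

const supabase = createClient<Database>(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

// "articles" must exist in the generated types, and "data" is fully typed;
// a typo such as .from('articel') would fail at compile time.
const { data, error } = await supabase.from('articles').select('id, title');

The same pattern applies to inserts and updates: the payload you pass is type-checked against the generated column definitions.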
Before finishing up this chapter, we'll have a look at some samples of how a Supabase client can be used with other frameworks so that you're familiar with Supabase's flexibility.

Connecting Supabase to other frameworks

Imagine that you've set up an awesome project with Next.js and Supabase. However, one day, you want to add another feature to your project – an extremely fast API that does complex calculations based on data from your Supabase instance. You notice that JavaScript won't be the best choice and decide to build a small Python server for this feature that can be called from your primary project.

This is what I did in one of my projects at Wahnsinn Design GmbH, where the web application, with Supabase at its heart, was built with Next.js. However, a new feature was added using another project with Python. Since there is a Python library for Supabase, the connection was seamless.

Since Supabase is not framework-dependent – it's just REST APIs – the options for integration are endless, from C#, Swift, and Kotlin to JavaScript-based frameworks such as Nuxt or refine (you'll find the most recent list at https://supabase.com/docs).

Although we will focus on JavaScript with Next.js in this book, you can take most samples, especially in the upcoming chapters, and translate them into other languages or frameworks with ease. This is because the Supabase clients for the different languages have similar syntax (as far as the language allows).

Let's have a brief look at how to connect Supabase in Nuxt and Python.

Nuxt 3

Nuxt is the Vue-based full-stack competitor to Next.js. Connecting with Nuxt comes down to installing the @nuxtjs/supabase package – which, again, is just a convenient wrapper for the @supabase/supabase-js package.

Once it's installed with npm install @nuxtjs/supabase, add the module to your Nuxt configuration, like so:

export default defineNuxtConfig({
  modules: ['@nuxtjs/supabase'],
})

Similar to our Next.js application, add the anon key as SUPABASE_KEY and your API URL as SUPABASE_URL to the .env file of your Nuxt project.

Now, you can use the client in Vue composables, like so:

<script setup lang="ts">
const supabase = useSupabaseClient();
</script>

Alternatively, you can use proper TypeScript types, as we've already learned, like so:

<script setup lang="ts">
import type { Database } from '~/supabase';
const client = useSupabaseClient<Database>();
</script>

You can find a detailed explanation of Nuxt 3 at https://supabase.nuxtjs.org/get-started.

Python

Python is fast and has become more popular than ever with many AI applications, because it is convenient to use for scientific calculations. The Python Supabase package is one of the easiest to use:

1. Install the Supabase package and the dotenv package with pip install supabase and pip install python-dotenv, respectively.

2. Create a .env file with two lines, one being your SUPABASE_ANON_KEY=... value and the other being your SUPABASE_URL=... value.

3. Initialize the Supabase client in a file such as supabase_client.py, as follows:

import os
from dotenv import load_dotenv
from supabase import create_client, Client

load_dotenv()
supabase_url: str = os.getenv("SUPABASE_URL")
supabase_anon_key: str = os.getenv("SUPABASE_ANON_KEY")
my_supabase: Client = create_client(supabase_url, supabase_anon_key)

4. Use it in any file via import (see the usage sketch after this list):

from supabase_client import my_supabase
...

You can find the full Python documentation here: https://supabase.com/docs/reference/python.
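As a quick illustration of step 4, the imported client is used much like its JavaScript counterpart. The articles table below is a hypothetical example, not part of the book's project:

from supabase_client import my_supabase

# Query a hypothetical "articles" table; .execute() sends the request.
response = my_supabase.table("articles").select("id, title").execute()
print(response.data)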
I'd be lying if I said all frameworks and languages are equal in terms of updates and support within the Supabase community. On the web, there is a general trend toward JavaScript-based environments (Vue, Next, React, Nuxt, Remix, Svelte, Deno, you name it), and at the time of writing this book, several client libraries exist, including JavaScript, Flutter, Python, C#, Swift, and Kotlin.

However, it is extremely important to keep in mind that Supabase can be used with any framework or language due to its REST-based nature, and that Supabase is very keen on contributions. Lastly, you can always just use the direct database connection – but with that, you'd be bypassing all authentication and permissions.

With this at hand, you are well positioned to tackle any project with Supabase, no matter whether you are using a framework-specific client, the RESTful API, or the direct database connection.

Conclusion

In this article, we explored the fundamentals of connecting directly to a Supabase database and the practical use cases it enables. While the Supabase client provides a robust and secure interface, direct access empowers you to extend functionality, integrate with various libraries, and handle advanced operations. We also discussed integrating Supabase with TypeScript and other frameworks like Nuxt and Python, demonstrating its versatility across languages and ecosystems. With these tools and insights, you're equipped to harness Supabase's full potential, whether working within its client or venturing into direct database interactions.

Author Bio

David Lorenz is a web software architect and lecturer who began programming at age 11. Before completing university in 2014, he had built a CRM system that automated an entire company and worked with numerous agencies through his own company. In 2015, he secured his first employment as a senior web developer, where he played a pioneering role in using cutting-edge technology and was an early adopter of progressive web apps. In 2017, he became the leading frontend architect and team lead for one of the largest projects at Mercedes-Benz.io, involving massive-scale architecture. Today, David provides valuable insights and guidance to clients across various industries, using his extensive experience and exceptional problem-solving abilities.


How to Integrate AI into Software Development Teams

Anderson Soares Furtado Oliveira
21 Nov 2024
15 min read
This article is an excerpt from the book, AI Strategies for Web Development, by Anderson Soares Furtado Oliveira. Embark on an enlightening AI journey by understanding its role and its fundamentals, crafting cutting-edge applications, and navigating ethical challenges. You'll also explore strategic tools and gain foresight into future trends.

Introduction

Integrating AI into software development teams is no longer a futuristic concept; it is a strategic necessity in today's digital era. AI has the potential to revolutionize software development by optimizing processes, solving complex problems, improving user experience, and driving business value. However, harnessing the power of AI requires more than just adopting new tools – it demands a shift in mindset, processes, skills, and team culture. In this article, we explore actionable strategies for software engineering leaders to successfully integrate AI into their teams, drawing from Gartner's recommendations and industry best practices. From fostering collaboration and upskilling teams to implementing data pipelines and AI solutions, these steps will help organizations fully leverage AI's transformative potential.

How to integrate AI into software development teams

AI is a technology that can transform the way we create and use software applications. It can help us solve complex problems, optimize processes, improve UX, and generate value for businesses. However, to fully leverage the potential of AI, it needs to be effectively integrated into software development teams. In this section, we will present some actions that software engineering leaders should consider in order to achieve this goal, based on Gartner's recommendations (https://www.gartner.com/en/articles/set-up-now-for-ai-to-augment-software-development).

Let's start:

Adopt an AI mindset from the start: The first action is to adopt an AI mindset from the start of the project, encouraging the exploration of AI techniques to improve application development. This means that developers should be open to learning about the possibilities and challenges of AI and seek innovative solutions that use this technology. In addition, leaders should set clear and measurable goals for the use of AI and align expectations with project stakeholders. So, encourage teams to explore AI by initiating projects that directly involve AI technologies. For instance, a development team could be tasked with creating a chatbot to streamline customer service interactions, encouraging them to learn and apply NLP techniques.

Provide a framework to identify AI opportunities: The second action is to provide a framework to identify when and where AI can yield better results. This involves analyzing the needs and requirements of the project and assessing whether AI can offer benefits in terms of quality, efficiency, scalability, security, or other aspects. It is also important to consider the costs and risks associated with implementing AI and compare them with available alternatives. The framework should guide developers in choosing the most suitable AI techniques for each case, such as ML, NLP, and computer vision. Develop a decision matrix to help identify opportunities for AI integration that can enhance project outcomes. This matrix could evaluate factors such as potential improvements in efficiency and quality against the costs and complexity of implementing AI solutions, helping to pinpoint where tools such as ML could be most beneficial (a small sketch of such a matrix follows).
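To make the decision-matrix idea tangible, here is a minimal, illustrative Python sketch of weighted scoring. The criteria, weights, and candidate features are hypothetical assumptions, not a prescribed methodology:

# Hypothetical weighted decision matrix for ranking AI opportunities.
CRITERIA = {"efficiency_gain": 0.4, "quality_gain": 0.3, "cost": -0.2, "complexity": -0.1}

candidates = {
    # Scores run from 1 (low) to 5 (high) per criterion, estimated by the team.
    "chatbot_support": {"efficiency_gain": 4, "quality_gain": 3, "cost": 2, "complexity": 3},
    "code_review_assistant": {"efficiency_gain": 5, "quality_gain": 4, "cost": 3, "complexity": 2},
}

def score(feature_scores):
    # Positive weights reward benefits; negative weights penalize cost and complexity.
    return sum(CRITERIA[c] * s for c, s in feature_scores.items())

for name, scores in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(scores):.2f}")

Ranking candidates this way keeps the conversation about where to apply AI grounded in explicit, comparable estimates rather than enthusiasm alone.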
Invest in dedicated AI solutions: The third action is to invest in dedicated AI solutions to support various roles and tasks in software engineering. These solutions can be tools, platforms, services, or libraries that use AI to facilitate or automate activities such as design, coding, testing, debugging, integration, deployment, and monitoring. They can increase the productivity, quality, and creativity of developers, as well as reduce errors and rework. Some examples of AI solutions for software engineering are intelligent assistants, code generators, code analyzers, and automatic testers. For example, implementing platforms such as TensorFlow or PyTorch for ML projects can aid in tasks ranging from predictive analytics to automated testing, thus boosting productivity and reducing the likelihood of errors.

Expand the data engineering pipeline: The fourth action is to expand the data engineering pipeline to leverage AI enrichment and enable intelligent applications. This means that developers should collect, store, process, analyze, and visualize data efficiently and securely, using AI to extract insights and value from the data. In addition, developers should integrate the data with AI models and use those models to provide intelligent features in applications, such as recommendations, customizations, predictions, and detections. Intelligent applications can improve performance, usability, and end-user satisfaction. By integrating comprehensive data management tools such as Apache Kafka for real-time data streaming and processing, teams can enhance their applications with features such as real-time analytics and dynamic UX customization (see the producer sketch below).
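As a hedged illustration of the Kafka-based pipeline mentioned above, the sketch below publishes user-interaction events with the kafka-python client. The broker address, topic name, and event shape are all assumptions made for the example:

import json
from kafka import KafkaProducer

# Assumed local broker and a hypothetical topic for user-interaction events.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

producer.send("user-interactions", {"user_id": 42, "action": "clicked_recommendation"})
producer.flush()  # Block until the event has actually been delivered.

Downstream consumers (analytics jobs, model-training pipelines) can then subscribe to the same topic without coupling to the application code.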
Foster collaboration between development and model-building teams: The fifth action is to foster collaboration between development teams and model-building teams to avoid overlapping responsibilities and ensure smooth deployment. This involves creating a culture of collaboration and communication, where both teams understand their roles and responsibilities and work together to implement AI solutions. This can help avoid conflicts, reduce delays, and ensure that AI models are correctly integrated into the software applications. Establish regular sync-up meetings between software developers and AI model builders to ensure alignment and seamless integration of AI capabilities into applications. These meetings can help clarify responsibilities, share insights, and quicken the pace of development.

Continuously train and upskill the team: The sixth action is to continuously train and upskill the team in AI technologies. This involves providing regular training sessions, workshops, and resources to help developers learn about the latest AI techniques and tools. It also involves creating a learning culture, where developers are encouraged to learn and share their knowledge with others. This can help build a team of skilled AI practitioners who can effectively use AI to improve software development. Create ongoing educational programs and provide access to courses from platforms such as Coursera or Udemy that cover advanced AI topics. Encouraging participation in hackathons or internal projects focused on AI can also foster practical experience and innovation.

Effectively integrating AI into software development teams is a complex task that requires a strategic and diligent approach. It's not just about adopting new tools or technologies but transforming the mindset, processes, skills, and culture of the team. To navigate this transformation successfully, a structured checklist can serve as a valuable guide, ensuring that every critical aspect is addressed systematically:

1. Assessment and planning:
Identify objectives: Define clear objectives for integrating AI into your development processes. Determine what problems you aim to solve or what improvements you want to achieve.
Evaluate readiness: Assess your team's current capabilities, infrastructure, and tools to determine readiness for AI integration.
Stakeholder alignment: Ensure all stakeholders understand the benefits and implications of AI integration. Secure their support and alignment with the project goals.

2. Data collection and management:
Identify data sources: Determine the types of data that will be valuable for AI-driven insights (e.g., source code data, user interaction data, performance data).
Set up data pipelines: Implement data pipelines using tools such as Apache Kafka for real-time data collection and streaming.
Ensure data quality: Establish processes for data cleaning, normalization, and validation to maintain high data quality.

3. Infrastructure and tools:
Select AI tools: Choose appropriate AI-powered tools for different stages of the development process, such as GitHub Copilot for code generation, Testim for automated testing, and Dynatrace for performance monitoring.
Scalable storage solutions: Implement scalable storage solutions such as Amazon S3 or Google Cloud Storage to handle large volumes of data.
Processing frameworks: Utilize data processing frameworks such as Apache Spark or Flink for efficient data processing.

4. Model development and integration:
Build AI models: Use ML frameworks such as TensorFlow, PyTorch, and scikit-learn to develop AI models that can analyze data and generate insights.
Integrate AI models: Integrate AI models into your development environment to provide intelligent features such as code suggestions, anomaly detection, and predictive analytics.

5. Testing and validation:
Automated testing tools: Implement AI-powered automated testing tools such as Testim to create and maintain test cases, ensuring the software remains robust and error-free.
Continuous integration: Set up continuous integration (CI) pipelines to automatically run tests and validate code changes.
Performance monitoring: Use tools such as New Relic AI and Dynatrace to monitor application performance and detect issues in real time.

6. Security and compliance:
Vulnerability scanning: Use AI-powered security tools such as Snyk and Veracode to identify and fix vulnerabilities in the code.
Compliance checks: Ensure that AI models and data processing adhere to relevant regulations and standards, such as the General Data Protection Regulation (GDPR).
7. Deployment and maintenance:
Automated deployment: Set up automated deployment pipelines to streamline the release process.
Real-time monitoring: Continuously monitor the application in production using tools such as Amazon CloudWatch and Splunk for anomaly detection.
Feedback loop: Establish a feedback loop to collect user feedback and performance data, using this information to continuously improve the AI models and development processes.

By following these actions, software engineering leaders can effectively integrate AI into their teams and leverage its potential to create innovative, high-quality, and intelligent software applications. This can lead to significant improvements in productivity, quality, creativity, and user satisfaction, as well as provide a competitive edge in today's increasingly digital and data-driven market.

However, it's important to remember that AI is just a tool that can help solve problems and generate value. The ultimate success of the project depends on the team's ability to understand user needs, create effective and innovative solutions, and deliver high-quality software. Therefore, AI should be integrated in a way that supports and enhances these goals, rather than replacing them.

Conclusion

Integrating AI into software development teams is a multifaceted process that goes beyond adopting cutting-edge tools. It involves fostering a culture of collaboration, continuous learning, and innovation, as well as ensuring robust data management, security, and compliance frameworks. By following a structured approach – starting with clear objectives and readiness assessments, implementing advanced tools and frameworks, and maintaining continuous validation and feedback loops – software engineering leaders can unlock AI's full potential. This integration will not only enhance productivity and quality but also empower teams to create intelligent, high-performing applications that meet user needs and provide a competitive edge. Ultimately, AI should be a powerful enabler, complementing human creativity and expertise to deliver software solutions that truly excel.

Author Bio

Anderson Soares Furtado Oliveira is an experienced executive, AI strategist, and machine learning engineer specializing in AI governance, risk management, and compliance. As a board member at The Global Center for Risk and Innovation (GCRI) and an AI strategy consultant at G³ AI Global, he co-authored the book PgM Canvas: Transforming Vision into Real Benefits – A Program Management Guide for Leaders and Managers. With over a decade of experience in IT governance (CGEIT) and a focus on integrating AI technologies to drive business growth, he has led numerous AI projects and developed AI governance frameworks. His expertise in digital transformation and national development has equipped him to create innovative solutions and ethical AI applications. Anderson is a PhD student in Computer Science and Computational Mathematics at the University of São Paulo and holds an MBA in Software Engineering Project Management.


Building Efficient Web APIs with .NET 8 and Visual Studio 2022

Jonathan R. Danylko
30 Oct 2024
15 min read
This article is an excerpt from the book, ASP.NET 8 Best Practices, by Jonathan R. Danylko. With the latest version of .NET 8.0 in LTS (Long-Term Support), best practices are becoming harder to find as the technology continues to evolve. This book will guide you through coding practices and various aspects of software development.

Introduction

In the ever-evolving landscape of web development, .NET 8 has emerged as a game-changer, especially in the realm of Web APIs. With new features and enhancements, .NET 8 prioritizes the ease and efficiency of building Web APIs, supported by robust tools in Visual Studio 2022. This chapter explores the innovations in .NET 8, focusing on creating and testing Web APIs seamlessly. From leveraging minimal APIs to utilizing Visual Studio's new features, developers can now build powerful REST-based services with simplicity and speed. We'll guide you through the process, demonstrating how to create a minimal API and highlighting the benefits of this approach.

Technical requirements

In .NET 8, Web APIs take a front seat. Visual Studio has added new features to make Web APIs easier to build and test. For this chapter, we recommend using Visual Studio 2022, but the only requirement to view the GitHub repository is a simple text editor. The code for Chapter 09 is located in Packt Publishing's GitHub repository, found at https://github.com/PacktPublishing/ASP.NET-Core-8-Best-Practices.

Creating APIs quickly

With .NET 8, APIs are integrated into the framework, making them easier to create, test, and document. In this section, we'll learn a quick and easy way to create a minimal API using Visual Studio 2022 and walk through the code it generates. We'll also learn why minimal APIs are the best approach to building REST-based services.

Using Visual Studio

One of the features of .NET 8 is the ability to create minimal REST APIs extremely fast. One way is to use the dotnet command-line tool; the other is to use Visual Studio. To do so, follow these steps:

1. Open Visual Studio 2022 and create an ASP.NET Core Web API project.

2. After selecting the directory for the project, click Next.

3. Under the project options, make the following changes:
Uncheck the Use Controllers option to use minimal APIs
Check Enable OpenAPI support to include support for API documentation using Swagger

Figure 9.1 – Options for a web minimal API project

4. Click Create.

That's it – we have a simple API! It may not be much of one, but it's still a complete API with Swagger documentation. Swagger is a tool for creating documentation for APIs and implementing the OpenAPI specification, whereas Swashbuckle is a NuGet package that uses Swagger for implementing it in Microsoft APIs.

If we look at the project, there's a single file called Program.cs. Opening Program.cs will show the entire application. This is one of the strong points of .NET – the ability to create a scaffolded REST API relatively quickly:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

var summaries = new[]
{
    "Freezing", "Bracing", "Chilly", "Cool", "Mild",
    "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
};

app.MapGet("/weatherforecast", () =>
{
    var forecast = Enumerable.Range(1, 5).Select(index =>
        new WeatherForecast
        (
            DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
            Random.Shared.Next(-20, 55),
            summaries[Random.Shared.Next(summaries.Length)]
        ))
        .ToArray();
    return forecast;
})
.WithName("GetWeatherForecast")
.WithOpenApi();

app.Run();

internal record WeatherForecast(DateOnly Date, int TemperatureC, string? Summary)
{
    public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
}

In the preceding code, we created our "application" through the .CreateBuilder() method. We also added the EndpointsApiExplorer and SwaggerGen services. EndpointsApiExplorer enables the developer to view all endpoints in Visual Studio, which we'll cover later. The SwaggerGen service, on the other hand, creates the documentation for the API when accessed through the browser. The next line creates our application instance using the .Build() method.

Once we have our app instance and we are in development mode, we can add Swagger and the Swagger UI. .UseHttpsRedirection() redirects HTTP requests to HTTPS to make the API secure. The next line creates our GET weatherforecast route using .MapGet(). We added the .WithName() and .WithOpenApi() methods to identify the primary method to call and to let .NET know it uses the OpenAPI standard, respectively. Finally, we called app.Run().

If we run the application, we will see the documentation on how to use our API and what's available. Running the application produces the following output:

Figure 9.2 – Screenshot of our documented Web API

If we call the /weatherforecast API, we see that we receive JSON back with a 200 HTTP status.

Figure 9.3 – Results of our /weatherforecast API

Think of this small API as middleware with API controllers combined into one compact file (Program.cs).
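To get a feel for how the pattern grows, here is a short, hypothetical sketch (not from the book's repository) of how a second endpoint could be added to the same Program.cs before app.Run():

// Hypothetical POST endpoint: accepts a forecast from the request body
// and echoes it back with a 201 Created status.
app.MapPost("/weatherforecast", (WeatherForecast forecast) =>
{
    // A real application would persist the forecast here.
    return Results.Created($"/weatherforecast/{forecast.Date}", forecast);
})
.WithName("CreateWeatherForecast")
.WithOpenApi();

Because the endpoint is registered with .WithOpenApi(), it would appear in the generated Swagger page alongside the scaffolded GET route.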
Why minimal APIs?

I consider minimal APIs to be a feature of .NET 8, even though it's a language concept. If the application is extremely large, adding minimal APIs should be an appealing feature in four ways:

Self-contained: Simple API functionality inside one file is easy for other developers to follow
Performance: Since we aren't using controllers, the MVC overhead isn't necessary when using these APIs
Cross-platform: With .NET, APIs can now be deployed on any platform
Self-documenting: While we can add Swashbuckle to other APIs, it also builds the documentation for minimal APIs

Moving forward, we'll take these minimal APIs and start looking at Visual Studio's testing capabilities.

Conclusion

In conclusion, .NET 8 has revolutionized the process of building Web APIs by integrating them more deeply into the framework, making it easier than ever to create, test, and document APIs. By harnessing the power of Visual Studio 2022, developers can quickly set up minimal APIs, offering a streamlined and efficient approach to building REST-based services. The advantages of minimal APIs – being self-contained, performant, cross-platform, and self-documenting – make them an invaluable tool in a developer's arsenal. As we continue to explore the capabilities of .NET 8, the potential for creating robust and scalable web applications is limitless, paving the way for innovative and efficient software solutions.

Author Bio

Jonathan "JD" Danylko is an award-winning, full-stack ASP.NET architect. He's used ASP.NET as his primary way to build websites since 2002 and, before that, Classic ASP. Jonathan contributes to his blog (DanylkoWeb.com) on a weekly basis, has built a custom CMS, is a founder of Tuxboard (an open-source ASP.NET dashboard library), has been on various podcasts, and has guest posted on the C# Advent Calendar for 6 years. Jonathan has worked in various industries for small, medium, and Fortune 100 companies, but currently works as an Architect at Insight Enterprise. The best way to contact Jonathan is through GitHub, LinkedIn, Twitter, email, or through the website.


Effortless Web Deployment: A Guide to Deploying Your Application on Netlify

Ekene Eze
30 Oct 2024
10 min read
This article is an excerpt from the book, Web Development on Netlify, by Ekene Eze. This book is a comprehensive guide to deploying and scaling frontend web applications on Netlify. With hands-on instructions and real-world examples, this book takes you from setting up a Netlify account and deploying web apps to optimizing performance.

Introduction

Deploying a web application can sometimes be a daunting task, especially with the various methods and tools available. In this article, we'll explore two straightforward deployment methods offered by Netlify: the drag-and-drop method, which is beginner-friendly and ideal for static sites, and the Netlify CLI (Netlify Dev) method, which provides greater control for developers who prefer using the command line.

Deploying your web application on Netlify

We will discuss two deployment methods in this chapter: the drag-and-drop method and the Netlify CLI (Netlify Dev) method. A third method, the Git-based method, was covered in the Connecting to a Git repository section of Chapter 1.

Netlify drag-and-drop deployment

The drag-and-drop deployment method is the most straightforward and beginner-friendly way to deploy a web application on Netlify. This method is suitable for static websites or applications that do not require complex build processes. To deploy your web application on Netlify using the drag-and-drop method, follow these steps:

1. Organize your project files and ensure your project's index.html file is in the root folder so that Netlify can easily find it and build your site from there:

Figure 2.1 – Netlify drop sample structure

2. Visit netlify.com and sign in or create an account.

3. On your Netlify dashboard, locate the Sites section. Drag and drop your project folder into the designated area. Netlify will automatically upload your files, create a new site, deploy it, and assign a randomly generated URL. You can click on the generated URL to view your live site.

4. Optionally, configure your site. To configure your site's settings, such as adding a custom domain or enabling SSL, click the Site settings button. We will discuss these configuration options in greater detail later, in the Configuring settings and options section.

Netlify CLI (Netlify Dev) deployment

The Netlify CLI deployment method offers greater control over the deployment process for developers who prefer using the command line. Follow these steps to deploy your web applications to Netlify using the Netlify CLI:

1. Install the Netlify CLI globally on your computer using npm:

npm install -g netlify-cli

2. Run the following command to authenticate your Netlify account:

netlify login

Your browser will open so that you can authorize access to your Netlify account.

3. Navigate to your project folder in the command line and run the following command to initialize a new Netlify site:

netlify init

4. You will be prompted to choose between connecting an existing Git repository or creating a new site without a Git repository. Choose the option that best suits your needs. Connecting to a Git repository enables continuous deployment.

5. If your project requires specific build settings, open the automatically created netlify.toml file in your project's root directory and configure the settings accordingly (an expanded sketch follows this list). Here's an example:

[build]
command = "npm run build"
publish = "dist"

This configuration would run the npm run build command and deploy the dist folder as the publish directory.
6. Run the following command in your project directory to deploy your site:

netlify deploy

By default, this command creates a draft deployment. Preview the draft by visiting the generated URL.

7. If you are satisfied with the draft deployment, run the following command for a production deployment:

netlify deploy --prod

This will create a production deployment with a randomly generated URL.

8. Visit your Netlify dashboard to view your live site or configure your site's settings, such as adding a custom domain or enabling SSL. This step will be covered in more detail in the Configuring settings and options section of this chapter.
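As promised in step 5, here is a slightly fuller netlify.toml sketch. The values are illustrative assumptions (your build command, publish directory, and redirect rules will differ), though the section names are standard netlify.toml configuration:

[build]
  command = "npm run build"
  publish = "dist"

[build.environment]
  # Hypothetical build-time variable; set real secrets in the Netlify UI instead.
  NODE_VERSION = "18"

# Send all unmatched paths to index.html, a common setup for single-page apps.
[[redirects]]
  from = "/*"
  to = "/index.html"
  status = 200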
Git-based deployment

Refer to Chapter 1 for the Git-based deployment process.

Choosing a deployment pattern

Need help choosing a pattern for your needs? Here's a comparison of the three deployment patterns offered by Netlify: Git-based deployments, CLI deployments, and drag-and-drop:

Deployment pattern        | When to choose                            | Key benefits
Git-based deployments     | Ideal for collaborative development       | Version control, automated builds, code review
CLI deployments           | Ideal for advanced automation scenarios   | Scripted deployments, custom workflows
Drag-and-drop deployments | Ideal for simple, non-technical use       | User-friendly visual interface, quick deployments

Table – Choosing a deployment pattern

Now, let's discuss when each deployment pattern is ideal and why:

Git-based deployments: Git-based deployments are suitable for collaborative development environments where multiple team members contribute to the code base. They are ideal when you want to leverage the power of version control systems such as Git. Git-based deployments offer version control, which allows you to track changes, collaborate with others, and roll back to previous versions if needed. They also enable automated builds triggered by changes to the repository, facilitating continuous integration and deployment workflows. Code review processes can be integrated into the deployment pipeline, ensuring code quality.

CLI deployments: CLI deployments are ideal for advanced automation scenarios, where you require fine-grained control over the deployment process and want to integrate it with custom scripts or workflows. CLI deployments offer flexibility and programmability. They allow you to script deployments using command-line tools, which can be useful for automating complex deployment scenarios. You can customize and extend the deployment process to fit your requirements while integrating with other tools or services.

Drag-and-drop deployments: Drag-and-drop deployments are ideal for non-technical users or individuals who prefer a simple, user-friendly interface for deploying static sites or applications quickly. Drag-and-drop deployments provide a visual interface that simplifies the deployment process. Users can simply drag and drop their site files or assets onto the Netlify web interface, and the platform takes care of the deployment and hosting. This pattern eliminates the need for technical knowledge or command-line usage, making it accessible to a wider range of users.

The choice of deployment pattern depends on your specific needs and your technical expertise. Git-based deployments are suitable for collaborative development, CLI deployments offer advanced automation capabilities, and drag-and-drop deployments are ideal for non-technical users seeking a simple interface. Understanding the strengths and trade-offs of each pattern will help you select the most appropriate deployment approach for your project.

Conclusion

Choosing the right deployment method is crucial for the success and efficiency of your web application. Whether you opt for the simplicity of the drag-and-drop method, the command-line control of the Netlify CLI, or the collaborative advantages of Git-based deployments, each approach has its unique strengths. The drag-and-drop method offers a quick and easy solution for non-technical users, while the CLI method provides advanced automation capabilities for more complex scenarios. Git-based deployments, on the other hand, are perfect for teams working in a collaborative environment with a need for version control. By understanding these methods and their respective benefits, you can confidently deploy your web application on Netlify using the approach that best aligns with your goals and expertise.

Author Bio

Ekene Eze is a highly experienced Developer Advocate with over five years of professional experience leading DevRel teams across multiple organizations. As a former member of the Developer Experience team at Netlify, he played a key role in helping numerous companies integrate and effectively utilize the Netlify platform. As a well-regarded speaker, he is dedicated to sharing his knowledge and expertise with the wider development community through a variety of mediums, including blog posts, video tutorials, live streams, and podcasts. Currently serving as the Director of Developer Relations at Abridged Inc, the author brings a wealth of experience and expertise to this comprehensive guide on scaling web applications with Netlify.


How to Install Ruby on Rails: A Comprehensive Guide for macOS, Windows, and Linux

Bernard Pineda
25 Oct 2024
10 min read
This article is an excerpt from the book, From PHP to Ruby on Rails, by Bernard Pineda. This book will help you adopt the Ruby mindset and get to grips with Ruby-related concepts. You'll learn about setting up your local environment, Ruby syntax, popular frameworks, and more. A language-agnostic approach will help you avoid common pitfalls and start integrating Ruby into your projects.

Introduction

Just like the libraries we've seen so far, Rails is an open source gem. It behaves a little differently than the gems we've seen so far, as it uses many dependencies and can generate code examples, but at the end of the day, it's still a gem. This means that we can either install it by itself or include it in a Gemfile. For this section, we will have to divide the process into three separate parts – macOS installation, Windows installation, and Linux installation – as each operating system behaves differently.

Installing Ruby on Rails on macOS

The first step of setting up our local environment is to install rbenv. For most Mac installations, brew will simplify this process. Let's get started with the steps:

1. Let's open a shell and run the following command:

brew install rbenv

2. This should install the rbenv program. Now, you'll need to add the following line to your bash profile:

eval "$(rbenv init -)"

3. Once you've added this line to your profile, you should activate the change by either opening a new shell or running the following command:

source ~/.bash_profile

Note that this command may differ if you're using another shell, such as zsh or fish.

4. With rbenv installed, we need to install Ruby 2.6.10 with the following command:

rbenv install 2.6.10

5. Once Ruby 2.6.10 has been installed, we must set the default Ruby version with the following command:

rbenv global 2.6.10

6. Now, we need to install the program that manages gems, called bundler. Let's install it with the following command:

gem install bundler

With that, our environment is ready for the next steps in this chapter. If you wish to see more details about this installation, please refer to the following web page: https://www.digitalocean.com/community/tutorials/how-to-install-ruby-on-rails-with-rbenv-on-macos.

Installing Ruby on Rails on Windows

Follow these steps to install Ruby on Rails on Windows:

1. To set up our local environment, first, we must install Git for Windows. We can download the package from https://gitforwindows.org/. Once downloaded, we can run the installer; it should open the installer application:

Figure 7.1 – Git installer

You can safely accept the default options unless you want to change any of the specific behavior of Git. At the end of the installation process, you may just deselect all the options of the wizard and move on to the next step:

Figure 7.2 – Git finalized installation

2. We will also need the Git SDK installed for some dependencies that Ruby on Rails requires. We can get the installer from https://github.com/git-for-windows/build-extra/releases/tag/git-sdk-1.0.8. Be careful to select the correct option for your platform (32 or 64 bits). In my case, I had to choose 64 bits, so I downloaded the git-sdk-installer-1.0.8.0-64.7z.exe binary:

Figure 7.3 – Git SDK download

3. Once this package has been downloaded, run it; we will be asked where we want the Git SDK to be installed. The default option is fine (C:\git-sdk-64):

Figure 7.4 – Git SDK installation location

This package might take a while to complete as it has to download other additional packages, but it will do so on its own.
Please be patient. Once this package has finished installing the SDK, it will open a Git Bash console, which looks similar to Windows PowerShell.

4. We can close this Git Bash console window and open a new Windows PowerShell. Once we have the new window open, we must type the following command:

new-item -type file -path $profile -force

This command will create a Windows PowerShell profile, which allows us to execute commands every time we open a Windows PowerShell console. Once we've run the previous command, we may also close the Windows PowerShell window and move on to the next step.

5. At this point, we will install rbenv, which allows us to install multiple versions of Ruby. However, this program wasn't created for Windows, so its installation is a little different than on other operating systems. Let's open a browser and go to the rbenv for Windows web page: https://github.com/ccmywish/rbenv-for-windows. On that page, we will find instructions on how to install rbenv, which we will do now. Let's open a new Windows PowerShell and type the following command:

$env:RBENV_ROOT = "C:\Ruby-on-Windows"

This command will set a special environment variable that will be used for the rbenv installation.

6. Once we've run this command, we must download the rest of the required files with the following command:

iwr -useb "https://github.com/ccmywish/rbenv-for-windows/raw/main/tools/install.ps1" | iex

7. Once this command has finished downloading the files from GitHub, modify the user's profile with the following command from within the Windows PowerShell:

notepad $profile

This will open the Notepad application and the profile we previously set. On the rbenv-for-windows web page, we can see what the content of the file should be. Let's add it with Notepad so that the profile file now looks like this:

$env:RBENV_ROOT = "C:\Ruby-on-Windows"
& "$env:RBENV_ROOT\rbenv\bin\rbenv.ps1" init

8. Save and close Notepad, and close all Windows PowerShell windows that we may have open. We should open a new Windows PowerShell to make these changes take effect. As this is the first time rbenv is running, our console will automatically install a default Ruby version. This might take a while and will put our patience to the test. Once the process has finished, we should see output similar to this:

Figure 7.5 – rbenv post-installation script

9. Now, we are ready to install other versions of Ruby. For Ruby on Rails 5, we will install Ruby 2.6.10. Let's install it by running the following command in the same Windows PowerShell window that we just opened:

rbenv install 2.6.10

The program will ask us whether we want to install the Lite version or the Full version. Choose the Full version. Once again, this might take a while, so please be patient.

10. Once this command has finished running, we must set this Ruby version for our whole system. We can do this by running the following command:

rbenv global 2.6.10

11. To confirm that this version of Ruby has been installed and enabled, use the following command:

ruby --version

This should give us the following output:

ruby 2.6.10-1 (set by C:\Ruby-on-Windows\global.txt)

12. Ruby needs a program called bundler to manage all the dependencies on our system. So, let's install this program with the following command:

gem install bundler
13. Once this gem has been installed, we must update the RubyGems system with the following command:

gem update --system 3.2.3

This command will also take a while to complete, but once it's finished, we will be ready to use Ruby on Rails on Windows.

Next, let's see the steps for installing Ruby on Rails on Linux.

Installing Ruby on Rails on Linux

For Ubuntu and Debian Linux distributions, we must also install rbenv and the dependencies necessary for Ruby on Rails to run correctly:

1. Let's start by opening a terminal and running the following command:

sudo apt update

2. Once this command has finished updating apt, we must install our dependencies for Ruby, Ruby on Rails, and some gems that require compiling. We'll do so by running the following command:

sudo apt install git curl libssl-dev libreadline-dev zlib1g-dev autoconf bison build-essential libyaml-dev libncurses5-dev libffi-dev libgdbm-dev pkg-config sqlite3 nodejs

This command might take a while.

3. Once it has finished running, we can install rbenv with the following command:

curl -fsSL https://github.com/rbenv/rbenv-installer/raw/HEAD/bin/rbenv-installer | bash

4. We should add rbenv to our $PATH. Let's do so by running the following command:

echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc

5. Now, let's add the initialize command for rbenv to our bash profile with the following command:

echo 'eval "$(rbenv init -)"' >> ~/.bashrc

6. Next, reload the bash profile with the following command:

source ~/.bashrc

7. This command will (among other things) make the rbenv executable available to us. Now, we can install Ruby 2.6.10 on our system with the following command:

rbenv install 2.6.10

This command might take a little while as it installs openssl, and that process will take some time.

8. Once this command has finished installing Ruby 2.6.10, we need to set it as the default Ruby version for the whole machine. We can do so by running the following command:

rbenv global 2.6.10

9. We can confirm that this version of Ruby has been installed by running the following command:

ruby --version

This will result in the following output:

ruby 2.6.10p210 (2022-04-12 revision 67958) [x86_64-linux]

10. Ruby needs a program called bundler to manage all the dependencies on our system. So, let's install this program with the following command:

gem install bundler

11. Once this gem has been installed, we can update the RubyGems system with the following command:

gem update --system 3.2.3

This command will also take a while to complete, but once it's finished, we will be ready to use Ruby on Rails on Linux.

For other Linux distributions and other operating systems, please refer to the official Ruby-lang page: https://www.ruby-lang.org/en/documentation/installation/.
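With Ruby and bundler in place on any of the three platforms, a quick way to confirm the environment is ready is to install Rails itself and generate a throwaway app. This is an illustrative sketch rather than a step from the book, and the pinned Rails version is an assumption for a Rails 5 setup:

gem install rails -v 5.2.8
rails --version        # should print the installed Rails version
rails new test_app     # scaffolds a hypothetical app to verify everything works

If these commands complete without errors, the toolchain (rbenv, Ruby 2.6.10, bundler, and Rails) is wired up correctly.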
Conclusion

In conclusion, installing Ruby on Rails varies across operating systems, but the general process involves setting up a version manager like rbenv, installing Ruby, and then using bundler to manage gems. Whether you're on macOS, Windows, or Linux, each system has specific steps to ensure Rails and its dependencies run smoothly. By following the detailed instructions for your platform, you'll have your development environment ready for Rails in no time. For further guidance and platform-specific nuances, refer to the official documentation and resources linked throughout this guide.

Author Bio

Bernard Pineda is a seasoned developer with 20 years of web development experience. Proficient in PHP, Ruby, Python, and other backend technologies, he has taught PHP and PHP-based frameworks through video courses on platforms like LinkedIn Learning. His extensive work with Ruby and Ruby on Rails, along with his curiosity about frontend development and game development, brings a diverse perspective to this book. Currently working as a Site Reliability Engineer in Silicon Valley, Bernard is always seeking new adventures.


Learning Essential Linux Commands for Navigating the Shell Effectively 

Expert Network
16 Aug 2021
9 min read
Once we learn how to deploy an Ubuntu server, how to manage users, and how to manage software packages, we should take a moment to learn some important concepts and commands that build more of the foundational knowledge that will serve us well in understanding advanced concepts and treading the path of expertise. These foundational concepts include core Linux commands for navigating the shell.

This article is an excerpt from the book, Mastering Ubuntu Server, Third Edition by Jeremy "Jay" La Croix – a hands-on book that will teach you how to deploy, maintain, and troubleshoot Ubuntu Server.

Learning essential Linux commands

Building solid competency on the command line is essential and effectively gives any system administrator or engineer superpowers. Our new abilities won't allow us to leap tall buildings in a single bound, but they will definitely enable us to execute terminal commands as if we're ninjas. While we won't master the art of using the command line in this section (that can only come with years of experience), we will definitely become more confident.

First, let's talk about moving from one place to another within the Linux filesystem. Specifically, by "Linux filesystem", I'm referring to the default structure of the various folders (also referred to as "directories") contained within your Ubuntu installation. The Linux filesystem contains many important directories, each with its own designated purpose, which we'll talk about in more detail in the book. Before we can explore that further, we'll need to learn how to navigate from one directory to another.

The first command we'll cover in this section, relative to navigating the filesystem, will clarify the directory you're currently working from. For that, we have the pwd command.

The pwd command

pwd stands for print working directory, and it shows you where you currently are in the filesystem. If you run it, you may see output such as this:

Figure 4.1: Viewing the current working directory

In this example, when I ran pwd, the output informed me that my current working directory is /home/jay. This is known as your home directory and, by default, every user has one. This is where all the files for your user account will reside by default. Sure, you can create files anywhere you'd like, even outside your home directory, if you have permission to do so or you use sudo. But just because you can doesn't mean you should. As you'll learn in this article, the Linux filesystem has a designated place for just about everything. But your home directory, located at /home/<username>, is yours. You own it, you control it – it's your home on the server. In the early 2000s, Linux installations with a graphical user interface even depicted your home directory with an icon of a house.

Typically, files that you create in your home directory will have a permission string similar to this:

-rw-rw-r-- 1 jay  jay      0 Jul  5 14:10 testfile.txt

You can see that, by default, files you create in your home directory are owned by your user and your group, and are readable by all three categories (user, group, and other).

The cd command

To change our current directory and navigate to another, we can use the cd command along with a path we'd like to move to:

cd /etc

Now, I haven't gone over the file and directory layout yet, so I just randomly picked the etc directory. The forward slash at the beginning designates the beginning of the filesystem. More on that later.
Now, we’re in the /etc directory, and our command prompt has even changed as well:

Figure 4.2: Command prompt and pwd command after changing a directory

As you could probably guess, the cd command stands for change directory, and it’s how you move your working directory from one to another while navigating around. You can use the following command, for example, to return back to the home directory:

cd /home/<user>

In fact, there are several ways to return home, a few of which are demonstrated in the following screenshot:

Figure 4.3: Other ways of navigating to the home directory

The first command, cd -, doesn’t actually have anything to do with your home directory specifically. It’s a neat trick to return you to whatever directory you were in most recently. For me, the cd - command took me to the previous directory I was just in, which just so happened to be /home/jay. The second command, cd /home/jay, took me directly to my home directory since I called out the entire path. The last command, cd ~, also took me to my home directory. This is because ~ is shorthand for the full path to your home directory, so you don’t really ever have to type out the entire path to /home/<user>. You can just refer to that path simply as ~.

The ls command

Another essential command is ls. The ls command lists the contents of the current working directory. We probably don’t have any contents in our home directory yet. But if we navigate to /etc by running cd /etc, as we did earlier, and then execute ls, we’ll see that the /etc directory has a number of files in it. Go ahead and try it yourself and see:

cd /etc
ls

We didn’t actually have to change our working directory to /etc just to list the contents. We could’ve just executed the following command:

ls /etc

Even better, we can run:

ls -l /etc

This gives us the contents in a long list, which I think is much easier to understand. It will show each directory or file entry on its own line, along with the permission string. You’re probably already familiar with ls as well as ls -l, so I won’t go into too much more detail here. The -l portion of the ls command in that example is known as an argument. I’m not referring to an argument such as the ever-ensuing debate in the Linux community over which command-line text editor is the best between vim and emacs (it’s clearly vim). Instead, I’m referring to the concept of an argument in shell commands that allows you to override the defaults, or feed options to the command in some way, such as in this example, where we format the output of ls as a long list.

The rm command

The rm command is another one that we touched on earlier, when we were discussing manually removing the home directory of a user that was removed from the system. So, at this point, you’re probably well aware of that command and what it does (it removes files and directories). It’s a potentially dangerous command, as you could use it to accidentally remove something that you shouldn’t have. We used the following command to remove the home directory of user dscully:

rm -r /home/dscully

As you can see, we’re using the -r argument to alter the behavior of the rm command, which, by default, doesn’t remove directories but only files. The -r argument instructs rm to remove everything recursively, even if it’s a directory. It will also remove subdirectories of the path, so you’ll definitely want to be careful with this command.
As I’ve mentioned earlier in the book, if you use sudo with rm, you can hypothetically delete your entire Ubuntu installation! Another option offered by rm is the -f argument, which is short for force; it tells rm not to prompt before removing things. This argument won’t be needed as often, and use cases for it are outside the scope of this article. But keep in mind that it exists, should you need it.

The touch command

Another foundational command that’s good to know is touch, which actually serves two purposes. First, assuming you have permission to do so in your current working directory, the touch command will create an empty file if it doesn’t already exist. Second, the touch command will update the modification time of a file or directory if it does already exist:

Figure 4.4: Experimenting with the touch command

To illustrate this, in the related screenshot, I ran several commands. First, I ran the following command to create an empty file:

touch testfile.txt

That file didn’t exist before, so when I ran ls -l afterward, it showed the newly created file with a size of 0 bytes. Next, I ran the touch testfile.txt command again a minute later, and you can see in the screenshot that the modification time went from 15:12 to 15:13.

When it comes to viewing the contents of a file, we’ll get to that later on in the book, Mastering Ubuntu Server, Third Edition. And there are definitely more commands that we’ll need to learn to build the basis of our foundation. But for now, let’s take a break from the foundational concepts to understand the Linux filesystem layout better.

Summary

There are more Linux commands than you will ever be able to memorize. Most of us just memorize our favorite commands and variations of commands. You’ll develop your own menu of these commands as you learn and expand your knowledge. In this article, we covered many of the foundational commands that are, for the most part, essential: pwd, cd, ls, rm, and touch.

About the author

Jeremy “Jay” La Croix is a technologist and open-source enthusiast, specializing in Linux. Jay is currently the director of Cloud Services at Adaptavist. He has 20 years of field experience across different firms as a Solutions Architect and holds a master’s degree in Information Systems Technology Management from Capella University. In addition, Jay has an active Linux-focused YouTube channel with over 186K followers and 15.9M views, available at LearnLinux.tv, where he posts instructional tutorial videos and other Linux-related content.

Gain Practical Expertise with the Latest Edition of Software Architecture with C# 9 and .NET 5 

Expert Network
08 Jul 2021
3 min read
Software architecture is one of the most discussed topics in the software industry today, and its importance will certainly grow in the future. But the speed at which new features are added to software solutions keeps increasing, and new architectural opportunities keep emerging. To strengthen your command of this topic, Packt brings you the Second Edition of Software Architecture with C# 9 and .NET 5 by Gabriel Baptista and Francesco Abbruzzese – a fully revised and expanded guide featuring the latest features of .NET 5 and C# 9.

This book covers the most common design patterns and frameworks involved in modern cloud-based and distributed software architectures. It discusses when and how to use each pattern by providing practical, real-world scenarios. It also presents techniques and processes such as DevOps, microservices, Kubernetes, continuous integration, and cloud computing, so that you can have a best-in-class software solution developed and delivered for your customers.

This book will help you to understand the product that your customer wants from you. It will guide you to deliver and solve the biggest problems you can face during development. It also covers the do's and don'ts that you need to follow when you manage your application in a cloud-based environment. You will learn about different architectural approaches, such as layered architectures, service-oriented architecture, microservices, Single Page Applications, and cloud architecture, and understand how to apply them to specific business requirements.

Finally, you will deploy code in remote environments or on the cloud using Azure. All the concepts in this book are explained with the help of real-world practical use cases where design principles make the difference when creating safe and robust applications. By the end of the book, you will be able to develop and deliver highly scalable and secure enterprise-ready applications that meet your end customers' business needs.

It is worth mentioning that Software Architecture with C# 9 and .NET 5, Second Edition not only covers the best practices that a software architect should follow for developing C# and .NET Core solutions, but also discusses all the environments that we need to master in order to develop a software product according to the latest trends.

This second edition has improved code and is adapted to the new opportunities offered by C# 9 and .NET 5. It adds new frameworks and technologies such as gRPC and Blazor, and describes Kubernetes in more detail in a dedicated chapter.

To get the most out of this book, treat it as guidance that you may want to revisit many times for different circumstances. Do not forget to have Visual Studio Community 2019 or higher installed, and be sure that you understand C# and .NET principles.

Exploring the Strategy Behavioral Design Pattern in Node.js

Expert Network
02 Jun 2021
10 min read
A design pattern is a reusable solution to a recurring problem. The term is really broad in its definition and can span multiple domains of an application. However, the term is often associated with a well-known set of object-oriented patterns that were popularized in the 90s by the book, Design Patterns: Elements of Reusable Object-Oriented Software, Pearson Education, by the almost legendary Gang of Four (GoF): Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides.

This article is an excerpt from the book Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino – a comprehensive guide for learning proven patterns, techniques, and tricks to take full advantage of the Node.js platform.

In this article, we’ll look at the behavior of components in software design. We’ll learn how to combine objects and how to define the way they communicate so that the behavior of the resulting structure becomes extensible, modular, reusable, and adaptable. After introducing all the behavioral design patterns, we will dive deep into the details of the Strategy pattern. Now, it's time to roll up your sleeves and get your hands dirty with some behavioral design patterns.

Types of Behavioral Design Patterns

- The Strategy pattern allows us to extract the common parts of a family of closely related components into a component called the context and allows us to define strategy objects that the context can use to implement specific behaviors.
- The State pattern is a variation of the Strategy pattern where the strategies are used to model the behavior of a component in different states.
- The Template pattern, instead, can be considered the "static" version of the Strategy pattern, where the different specific behaviors are implemented as subclasses of the template class, which models the common parts of the algorithm.
- The Iterator pattern provides us with a common interface to iterate over a collection. It has now become a core pattern in Node.js. JavaScript offers native support for the pattern (with the iterator and iterable protocols). Iterators can be used as an alternative to complex async iteration patterns and even to Node.js streams.
- The Middleware pattern allows us to define a modular chain of processing steps. This is a very distinctive pattern born from within the Node.js ecosystem. It can be used to preprocess and postprocess data and requests.
- The Command pattern materializes the information required to execute a routine, allowing such information to be easily transferred, stored, and processed.

The Strategy Pattern

The Strategy pattern enables an object, called the context, to support variations in its logic by extracting the variable parts into separate, interchangeable objects called strategies. The context implements the common logic of a family of algorithms, while a strategy implements the mutable parts, allowing the context to adapt its behavior depending on different factors, such as an input value, a system configuration, or user preferences.

Strategies are usually part of a family of solutions, and all of them implement the same interface expected by the context. The following figure shows the situation we just described:

Figure 1: General structure of the Strategy pattern

Figure 1 shows you how the context object can plug different strategies into its structure as if they were replaceable parts of a piece of machinery. Imagine a car; its tires can be considered its strategy for adapting to different road conditions.
We can fit winter tires to go on snowy roads thanks to their studs, while we can decide to fit high-performance tires for traveling mainly on motorways for a long trip. On the one hand, we don't want to change the entire car for this to be possible, and on the other, we don't want a car with eight wheels so that it can go on every possible road.

The Strategy pattern is particularly useful in all those situations where supporting variations in the behavior of a component requires complex conditional logic (lots of if...else or switch statements) or mixing different components of the same family.

Imagine an object called Order that represents an online order on an e-commerce website. The object has a method called pay() that, as it says, finalizes the order and transfers the funds from the user to the online store. To support different payment systems, we have a couple of options:

- Use an if...else statement in the pay() method to complete the operation based on the chosen payment option
- Delegate the logic of the payment to a strategy object that implements the logic for the specific payment gateway selected by the user

In the first solution, our Order object cannot support other payment methods unless its code is modified. Also, this can become quite complex when the number of payment options grows. Instead, using the Strategy pattern enables the Order object to support a virtually unlimited number of payment methods and keeps its scope limited to only managing the details of the user, the purchased items, and the relative price, while delegating the job of completing the payment to another object.

Let's now demonstrate this pattern with a simple, realistic example.

Multi-format configuration objects

Let's consider an object called Config that holds a set of configuration parameters used by an application, such as the database URL, the listening port of the server, and so on. The Config object should be able to provide a simple interface to access these parameters, but also a way to import and export the configuration using persistent storage, such as a file. We want to be able to support different formats to store the configuration, for example, JSON, INI, or YAML.

By applying what we learned about the Strategy pattern, we can immediately identify the variable part of the Config object, which is the functionality that allows us to serialize and deserialize the configuration. This is going to be our strategy.

Creating a new module

Let's create a new module called config.js, and let's define the generic part of our configuration manager:

import { promises as fs } from 'fs'
import objectPath from 'object-path'

export class Config {
  constructor (formatStrategy) {                           // (1)
    this.data = {}
    this.formatStrategy = formatStrategy
  }

  get (configPath) {                                       // (2)
    return objectPath.get(this.data, configPath)
  }

  set (configPath, value) {                                // (2)
    return objectPath.set(this.data, configPath, value)
  }

  async load (filePath) {                                  // (3)
    console.log(`Deserializing from ${filePath}`)
    this.data = this.formatStrategy.deserialize(
      await fs.readFile(filePath, 'utf-8')
    )
  }

  async save (filePath) {                                  // (3)
    console.log(`Serializing to ${filePath}`)
    await fs.writeFile(filePath,
      this.formatStrategy.serialize(this.data))
  }
}
This is what's happening in the preceding code:

1. In the constructor, we create an instance variable called data to hold the configuration data. We also store formatStrategy, which represents the component that we will use to parse and serialize the data.
2. We provide two methods, set() and get(), to access the configuration properties using a dotted path notation (for example, property.subProperty) by leveraging a library called object-path (nodejsdp.link/object-path).
3. The load() and save() methods are where we delegate, respectively, the deserialization and serialization of the data to our strategy. This is where the logic of the Config class is altered based on the formatStrategy passed as an input in the constructor.

As we can see, this very simple and neat design allows the Config object to seamlessly support different file formats when loading and saving its data. The best part is that the logic to support those various formats is not hardcoded anywhere, so the Config class can adapt without any modification to virtually any file format, given the right strategy.

Creating format strategies

To demonstrate this characteristic, let's now create a couple of format strategies in a file called strategies.js. Let's start with a strategy for parsing and serializing data using the INI file format, which is a widely used configuration format (more info about it here: nodejsdp.link/ini-format). For the task, we will use an npm package called ini (nodejsdp.link/ini):

import ini from 'ini'

export const iniStrategy = {
  deserialize: data => ini.parse(data),
  serialize: data => ini.stringify(data)
}

Nothing really complicated! Our strategy simply implements the agreed interface, so that it can be used by the Config object. Similarly, the next strategy that we are going to create allows us to support the JSON file format, widely used in JavaScript and in the web development ecosystem in general:

export const jsonStrategy = {
  deserialize: data => JSON.parse(data),
  serialize: data => JSON.stringify(data, null, '  ')
}

Now, to show you how everything comes together, let's create a file named index.js, and let's try to load and save a sample configuration using different formats:

import { Config } from './config.js'
import { jsonStrategy, iniStrategy } from './strategies.js'

async function main () {
  const iniConfig = new Config(iniStrategy)
  await iniConfig.load('samples/conf.ini')
  iniConfig.set('book.nodejs', 'design patterns')
  await iniConfig.save('samples/conf_mod.ini')

  const jsonConfig = new Config(jsonStrategy)
  await jsonConfig.load('samples/conf.json')
  jsonConfig.set('book.nodejs', 'design patterns')
  await jsonConfig.save('samples/conf_mod.json')
}

main()

Our test module reveals the core properties of the Strategy pattern. We defined only one Config class, which implements the common parts of our configuration manager; then, by using different strategies for serializing and deserializing data, we created different Config class instances supporting different file formats.

The example we've just seen shows us only one of the possible alternatives that we had for selecting a strategy. Other valid approaches might have been the following:

- Creating two different strategy families: one for deserialization and the other for serialization. This would have allowed reading from one format and saving to another.
- Dynamically selecting the strategy: depending on the extension of the file provided, the Config object could have maintained a map of extension → strategy and used it to select the right algorithm for the given extension.
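The article names YAML as a target format but only implements INI and JSON. Whichever selection mechanism you prefer, adding YAML would just mean one more strategy object with the same agreed interface. Here is a hedged sketch, assuming the js-yaml npm package (my assumption; it is not used by the original example):

import yaml from 'js-yaml'

// Same agreed interface as iniStrategy and jsonStrategy
export const yamlStrategy = {
  deserialize: data => yaml.load(data),  // YAML text -> plain object
  serialize: data => yaml.dump(data)     // plain object -> YAML text
}

Passing yamlStrategy to the Config constructor would then enable .yaml files without touching the Config class itself, which is exactly the point of the pattern.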
As we can see, we have several options for selecting the strategy to use, and the right one only depends on your requirements and the trade-off in terms of features and simplicity that you want to obtain.

Furthermore, the implementation of the pattern itself can vary a lot as well. For example, in its simplest form, the context and the strategy can both be simple functions:

function context(strategy) {...}

Even though this may seem insignificant, it should not be underestimated in a programming language such as JavaScript, where functions are first-class citizens and used as much as fully-fledged objects. Between all these variations, though, what does not change is the idea behind the pattern; as always, the implementation can slightly change, but the core concepts that drive the pattern are always the same.

Summary

In this article, we dive deep into the details of the Strategy pattern, one of the behavioral design patterns in Node.js. Learn more in the book, Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino.

About the Authors

Mario Casciaro is a software engineer and entrepreneur. Mario worked at IBM for a number of years, first in Rome, then in the Dublin Software Lab. He currently splits his time between Var7 Technologies (his own software company) and his role as lead engineer at D4H Technologies, where he creates software for emergency response teams.

Luciano Mammino wrote his first line of code at the age of 12 on his father's old i386. Since then he has never stopped coding. He is currently working at FabFitFun as principal software engineer, where he builds microservices to serve millions of users every day.

5 Key skills for web and app developers to learn in 2020

Richard Gall
20 Dec 2019
5 min read
Web and application development can change quickly. Much of this is driven by user behavior and user needs - and if you can't keep up with it, it's going to be impossible to keep your products and projects relevant. The only way to do that, of course, is to ensure your skills are up to date and constantly looking forward to what might be coming. You can't predict the future, but you can prepare yourself. Here are 5 key skill areas that we think web and app developers should focus on in 2020.

Artificial intelligence

It's impossible to overstate the importance of AI in application development at the moment. Yes, it's massively hyped, but that's largely because it's so ubiquitous. Indeed, to a certain extent, many users won't even realize they're interacting with AI or machine learning systems. You might even say that that's when it's used best.

The ways in which AI can be used by web and app developers are extensive and constantly growing. Perhaps the most obvious is personal recommendations, but it's chatbots and augmented reality that are really pushing the boundaries of what's possible with AI in the development field. Artificial intelligence might sound daunting if you're primarily a web developer. But it shouldn't - you don't need a computer science or math degree to use it effectively. There are now many platforms and tools available for using machine learning technology out of the box, from Azure's Cognitive Services and Amazon's Rekognition to ML Kit, built by Google for mobile developers. Learn how to build smart, AI-backed applications with Azure Cognitive Services in Azure Cognitive Services for Developers [Video].

New programming languages

Earlier this year I wrote about how polyglot programming (being able to use more than one language) "allows developers to choose the right language to solve tough engineering problems." For web and app developers, who are responsible for building increasingly complex applications and websites in as elegant and as clean a manner as possible, this is particularly true. The emergence of languages like TypeScript and Kotlin attests to the importance of keeping your programming proficiency up to date. Moreover, you could even say that they highlight that, however popular core languages like JavaScript and Java are, there are now some tasks that they're just not capable of dealing with.

So, this doesn't mean you should just ditch your favored programming languages in 2020. But it does mean that learning a new language is a great way to build your skill set. Explore new programming languages with eBook and video bundles here.

Accessibility

Web accessibility is a topic that has been overlooked for too long. That needs to change in 2020. It's not hard to see how it gets ignored. When the pressure to deliver software is high, thinking about the consequences of specific design decisions on different types of users is almost certainly going to be pushed to the bottom of developers' priorities. But if anything, this means we need a two-pronged approach - on the one hand, developers need to commit to learning web accessibility themselves, but they also need to be evangelists in communicating its importance to non-technical team members. The benefits of this will be significant: it could be a big step towards a more inclusive digital world, but from a personal perspective, it will also help developers to become more aware and well-rounded in their design decisions.
And insofar as no one's taking real leadership for this at the moment, it's the perfect opportunity for developers to prove their leadership chops. Read next: It's a win for Web accessibility as courts can now order companies to make their sites WCAG 2.0 compliant

JAMstack and (sort of) static websites

Traditional CMSes like WordPress can be a pain for developers if you want to build something that is more customized than what you get out of the box. This is one of the reasons why JAMstack (a term coined by Netlify) is so popular - combining JavaScript, APIs, and markup, it offers web developers a way to build secure, performant websites very quickly. To a certain extent, JAMstack is the next generation of static websites; but JAMstack sites aren't exactly static, as they call data from the server side through APIs. Developers then call on the help of templated markup - usually in the form of static site generators (like Gatsby.js) or build tools - to act as a pre-built front end.

The benefits of JAMstack as an approach are well documented. Perhaps the most important, though, is that it offers a really great developer experience. It allows you to build with the tools that you want to use, integrate with services you might already be using, and minimize the level of complexity that can come with some development approaches. Get started with Gatsby.js and find out how to use it in JAMstack with The Gatsby Masterclass video.

State management

We've talked about state management recently - in fact, lots of people have been talking about it. We won't go into detail about what it involves, but the issue has grown as increasing app complexity has made it harder to gain a single source of truth on what's actually happening inside our applications. If you haven't already, it's essential to learn some of the design patterns and approaches for managing application state that have emerged over the last couple of years. The two most popular - Flux and Redux - are very closely associated with React, but for Vue developers, Vuex is well worth learning.

Thinking about state management can feel brain-wrenching at times. However, getting to grips with it can really help you to feel more in control of your projects. Get up and running with Redux quickly with the Redux Quick Start Guide.
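To give a flavor of the Redux approach just mentioned, here is a minimal sketch; the store shape and the action names are invented for illustration, not taken from any particular app:

import { createStore } from 'redux'

// A reducer is a pure function: (state, action) -> next state
function cartReducer (state = { items: [] }, action) {
  switch (action.type) {
    case 'cart/itemAdded':
      return { ...state, items: [...state.items, action.payload] }
    case 'cart/cleared':
      return { ...state, items: [] }
    default:
      return state
  }
}

// The store becomes the single source of truth for this slice of state
const store = createStore(cartReducer)
store.subscribe(() => console.log(store.getState()))
store.dispatch({ type: 'cart/itemAdded', payload: 'book' })
// -> { items: ['book'] }

The key design choice is that state is never mutated directly: components dispatch plain action objects, and the reducer computes the next state, which is what makes the "single source of truth" auditable.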

App and web development in 2020: what you need to learn

Richard Gall
19 Dec 2019
6 min read
Web developers and app developers: not sure what you should be learning over the next 12 months? In such a fast-changing industry it can be hard to get a sense of which direction the proverbial wind is blowing, which means that when it comes to your future you can leave yourself in a somewhat weakened position. So, if you're looking for some ideas on how to organize your learning, look no further than this quick list.

Learn how to build microservices

Microservices are the dominant architectural model. There are many different reasons for this - the growth of APIs and web services, containerization - but ultimately what's important is that microservices allow you to build software in a way that's modular. That promotes efficiency and agility which, in turn, empowers developers. So, if you don't yet have experience of developing microservices, you could do a lot worse than learning how to build them in 2020. Indeed, even if you don't have a professional reason to learn more about microservices, exploring the topic through personal projects could be a real benefit to you in the future. Find microservices content in Packt's range of cloud bundles.

Learn a new language

This one might not be at the top of your agenda. If you've been working with JavaScript or Java for years, it seems strange to make time to learn something completely new. Surely, you're probably thinking, I'd rather invest my time and energy into learning something that I can use at work? In fact, learning a new language might just be one of the best things you can do in 2020. Not only will it give your resume a gentle kick and potentially open up new opportunities for you in the future, it will also give you a better holistic understanding of how programming languages work, and what their relative limitations and advantages over one another are. It's a cliché that travel broadens the mind, but when it comes to exploring and adventuring into new languages it's certainly true.

But what should you learn? That, really, is down to you, what your background is, and where you want to go. Some options are obvious - if you're a Java developer, Kotlin is the natural next step. Sometimes it's not so obvious - JavaScript developers, for example, might want to learn Go if they're becoming more familiar with backend development, maybe even Rust or C++ if they're feeling particularly adventurous and ambitious. Explore new programming languages. Click here to find Packt's programming bundles of eBooks and videos.

Learn a new framework

Even if you don't think learning a new language is appropriate for you, learning a new framework might be a more practical option that you can begin to use immediately. In the middle of the decade, when Angular.js was gaining traction across web development, there was a lot of debate and discussion about the 'best' framework. Fortunately, this discourse has declined as it's become clearer that choosing a framework is really about the best tool for the job, not a badge of personal identity. This change means that being open to learning new tools and frameworks will help you not only in terms of building out your resume, but also in terms of having a wide range of options for solving problems in your day-to-day work. Packt's web development bundles feature a range of eBooks on the most in-demand frameworks and tools. Explore them here.

Re-learn a language you know

Although it's good to learn new things (obviously), we don't talk enough about how valuable it can be to go back and learn something anew.
This is for a couple of different reasons: on the one hand, it's good to be able to review your level of understanding and to get up to speed with new features and capabilities that you might not have previously known about, but it also gives you a chance to explore a language from scratch and try to uncover a new perspective on it. This is particularly useful if you want to learn a new paradigm, like functional programming. Going back to core principles and theory is essential if you're to unlock new levels of performance and control. Although it's easy to dismiss theory as something academic, let's make 2020 the year we properly begin to appreciate the close kinship between theory and practice.

Learn how to go about learning

It's a given that being a developer means lifelong learning. And while we encourage you to take our advice and be open to new frameworks, microservices, and new programming languages, ultimately what you learn starts and ends with you. This isn't easy - in fact, it's probably one of the hardest aspects of working in technology, full stop. That's why, in 2020, you should make it your goal to learn more about learning. This article written by Jenn Schiffer is a couple of years old now, but it articulates this challenge very well. She focuses mainly on some of the anxieties around learning new things and entering new communities, but the overarching point - that the conversation around how and why technologies should be used needs to be clearer - is a good one. True, there's not much you can do about poor documentation or a toxic community, but you can think about your learning in a way that's both open-minded and well structured.

For example, think about what you want or need to do - be reflective about your work and career. Be inquisitive and exploratory in your research - talk to people you know and trust, and read opinions and experiences from people you've never even heard of. By adopting a more intentional approach to the things you learn and the way you learn about them, you'll find that you'll not only be able to learn new skills more efficiently, you'll also enjoy it more. Explore Packt's full range of eBooks and videos on the Packt store.

App and web development in 2019: What we loved and what mattered

Richard Gall
17 Dec 2019
10 min read
For app and web developers, the world at the end of the decade is very different to the one that began it. Sure, change is inevitable, but the way the discipline(s) have evolved in just a matter of years (arguably the most significant changes came in the latter half of the decade) is a mark of how technologies, business needs, customer expectations, and harsh economic realities have conspired to shape and remold our notion of what software development actually looks like. Full-stack, cloud-native, DevOps (and maybe even 'NoOps'): all these things have been shaping the way app and web developers work over the last ten years. And in 2019 it feels like that new world is beginning to settle into a specific pattern. Many of the trends and technologies that really defined 2019 are, in truth, trends that have been nascent and emerging for a number of years.

Cloud and microservices

When cloud first emerged - at some point much earlier this decade - it was largely just about resource efficiency. The idea was to ditch your on-premises servers and move instead to a model whereby you rent server space from big vendors. Okay, perhaps that's a somewhat crude summation; but it's nevertheless the case that cloud was primarily a field dealt with by administrators and IT professionals, rather than developers.

Today, of course, cloud is having a very real impact on the way developers work, giving a degree of agility and flexibility in how software is deployed and managed. With cloud partnering nicely with microservices - which allow developers to break down an application into constituent parts - it's easy to see how these two trends are getting many app and web developers excited. They shorten the development lifecycle and allow developers to get closer to their code as it runs in production.

Learn cloud development - explore Packt's range of cloud bundles. Pick up 5 for $25 throughout our $5 campaign. An essential resource for microservices development: Microservices Development Cookbook. $5 for the rest of December and into January.

Go and Rust

The growth of Go and Rust throughout 2019 (okay, and a bit before that too) is directly related to the increasing importance of cloud and microservices in software development. Although JavaScript has been taken beyond the browser, it isn't the best programming language for building high-performance applications; that's where the likes of Go and Rust have been taking over a not insignificant slice of the collective developer imagination.

Both languages share a similar history (as this article nicely details); at a fundamental level, moreover, both also aim to build on C++, but with accessibility and safety in mind (C++ has long had a reputation for being both complicated and sometimes vulnerable to bugs and security issues). Go is likely to continue to grow at a faster rate than Rust: it's a lot easier to use, so for web and app developers with experience in Java or JavaScript, it's a much gentler learning curve. But this isn't to say that Rust won't remain a fixture for developers. Consistently ranked the 'most loved' language in Stack Overflow surveys, as developers seek relentless improvements to performance alongside watertight reliability and security, Rust will remain an important language in a fast-changing development world.

Search Packt's extensive selection of Go eBooks and videos - $5 throughout December and into the new year. Visit the Packt store. Learn Rust with Rust Programming Cookbook.
WebAssembly

It's impossible to talk about web and application development without mentioning WebAssembly. Arguably the full implications of WebAssembly are yet to be realized (indeed, at ReactConf 2019, Richard Feldman suggested that it was unlikely to initiate a wholesale transformation of the web - that, he believes, will take a few more years), but 2019 has been a year when it has properly started to make many developers sit up and take notice.

But why is WebAssembly so exciting? Essentially, it allows you to run code on the web using multiple languages at a speed that's almost akin to native applications. Indeed, WebAssembly is making languages like Rust more attractive to web developers. If WebAssembly is a bridge between Rust and JavaScript, Rust immediately becomes more attractive to developers who previously would have paid very little attention to it. If 2019 was the year more developers decided to take note of WebAssembly, 2020 will be the year when we start to see increased adoption. Learn WebAssembly is $5 throughout this year's $5 campaign. Get it here.

State management: Redux, Flux, Vuex…

For many years, MVC (Model-View-Controller) was the dominant model for managing application state. However, as applications have grown in complexity, it has become more and more difficult for us to establish a 'single source of truth' inside our apps. That can impact performance and can also make them harder to maintain on the development side.

To tackle this, we've started to see a number of different patterns and frameworks emerging to help us manage application state. The growth of React has been instrumental here - as a very lightweight library it gives developers the freedom to manage application state however they choose - and it's worth noting that the Flux architecture was developed by Facebook to complement the library.

Watch: Why do React developers love Redux for state management? https://www.youtube.com/watch?v=7YzgZA_hA48&feature=emb_title

Following Flux we've also had Redux and Vuex - all of them, each with subtly different approaches, have become an essential aspect of modern web and app development. And while they might not have first emerged in 2019, it feels as though the state management discourse has hit heights that it previously had not. If you haven't yet had time to dive into this topic, it's well worth making sure you commit to it in 2020.

Learning React with Redux and Flux [Video] is $5 - purchase it here on the Packt store. Learn Vuex with Vuex Quick Start Guide.

Functional programming

Functional programming is on the rise. This doesn't, however, mean that purely functional languages like Haskell and Lisp are dominating the programming language landscape - in fact, it's been said that JavaScript is now the language most used for functional programming (even though it isn't a functional language). Functional programming is popular because it can help minimize complexity and make it easier to test and reuse code. When you're dealing with a dense codebase that grows and grows as your application scales, this is immensely valuable.

It's also worth placing functional programming in the context of managing application state. Insofar as functional programming allows you to be specific in determining how different parts of a component should interact with one another, the function is a theoretical abstraction that makes it easier to get to grips with managing the state of a complex and dynamic application.
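To make the functional style concrete, here is a tiny JavaScript sketch; the helper names are invented for illustration, not taken from any library:

// Pure functions composed with a small helper instead of shared mutable state
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x)

const normalize = s => s.trim().toLowerCase()
const countWords = s => s.split(/\s+/).length

// compose applies normalize first, then countWords
const wordCount = compose(countWords, normalize)

console.log(wordCount('  Functional Programming In JavaScript  ')) // 4

Because each function depends only on its input, each piece can be tested in isolation and reused in other pipelines, which is the complexity-reducing property described above.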
Get to grips with functional programming and discover how to leverage its power. Read Mastering Functional Programming.

The new JavaScript framework boom

I'm not sure whether JavaScript fatigue is over. On the one hand, the space has coalesced around a handful of core tools and frameworks - React, GraphQL, Node.js, among a couple of others - but on the other hand, the last year (and a bit) has been characterized by many other small projects developed to support these core tools. So, while it's maybe a little bit easier to parse the JavaScript ecosystem at a pretty high level of abstraction than it was in the past, at a deeper level you have a range of tools that are designed for very specific purposes or to be used alongside some of those frameworks and tools just mentioned. Tools ranging from Koa.js (for Node), to Polymer, Nuxt, Next, Gatsby, Hugo, and Vuelidate (to name just a random assortment) are all vying for developer mindshare.

You could say that many of these tools are 'second-order' frameworks and libraries - they don't fundamentally change the way you think about development but instead make it easier to do specific things. It's for this reason that I'm reluctant to suggest that JavaScript fatigue will return to its former glory - this new JavaScript framework boom is very much geared towards productivity and immediate gains rather than overhauling the way you build applications because of some principled belief in the 'right' or 'best' way to do things.

Learn Nuxt: pick up Build a News Feed with Nuxt 2 and Firestore [Video] for $5 before the end of the year. Get to grips with Next.js with Next.js Quick Start Guide. Learn Koa with Hands-on Server-Side Development with Koa.js [Video]. Learn Gatsby with GatsbyJS: Build a PWA Blog with GraphQL, React, and WordPress [Video].

GraphQL

Much of this decade has been dominated by REST when it comes to APIs. But just as the so-called 'API economy' has gone into overdrive, GraphQL has come on the scene. Adoption has been rapid, with many developers turning to it because it allows them to handle more complex and sophisticated requests at scale without writing long and confusing lines of code. This isn't to say, of course, that GraphQL has all but killed REST. Instead, it's more the case that GraphQL has been found to be a better tool for managing APIs in specific domains than REST. If you're dealing with APIs that are complex in terms of the number of entities and the relationships between them, then GraphQL can prove immensely useful.

Find out how to put GraphQL to use. Pick up GraphQL Projects for $5 for the rest of December and into January.

React Hooks (and Vue Hooks)

Launched with React 16.8, React Hooks "let you use state and other React features without writing a class" (that's from the project's site). That's a good thing because building components with a class can sometimes be somewhat inelegant. For a better explanation of the 'point' of React Hooks, you could do a lot worse than this article. Vue Hooks is part of Vue 3.0, which won't be officially released until early next year. But the fact that both leading front-end frameworks are taking similar approaches to improve the developer experience demonstrates that they're responding to a need for more flexibility and control over large projects. That means 2019 has been the year that both tools have hit maturity in the web development space. Learn how React Hooks work with Packt's new React Hooks video.
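As a quick illustration of why Hooks are appealing, here is a minimal sketch of a function component using useState; the component itself is invented for this example:

import React, { useState } from 'react'

// Local state in a function component - no class, no this.setState
function Counter () {
  const [count, setCount] = useState(0)
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  )
}

export default Counter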
Conclusion

The web and app development world is becoming difficult to parse. A few years ago, discussion and debate really centered on frameworks; today it feels like there are many other elements to consider. Part of this is symptomatic of a slow DevOps revolution - the gap between build and production is smaller than it has ever been, and developers now have a significant degree of accountability and responsibility for things that were once the preserve of different breeds of engineers and IT professionals. Perhaps that story is a bit of a simplification - however, it's hard to dispute that the web and app developer skill set is incredibly diverse. That means there are an array of options and opportunities out there for those developers looking to push their careers forward, but it also means that they'll need to do some serious decision making about what they want to do and how they want to do it.

W3C (World Wide Web Consortium) declares WebAssembly 1.0 as an official web standard

Sugandha Lahoti
09 Dec 2019
3 min read
Last Thursday, the World Wide Web Consortium declared WebAssembly 1.0 an official W3C Recommendation. With this announcement, WebAssembly becomes the fourth language to run natively in browsers, following HTML, CSS, and JavaScript.

“The arrival of WebAssembly expands the range of applications that can be achieved by simply using Open Web Platform technologies. In a world where machine learning and Artificial Intelligence become more and more common, it is important to enable high-performance applications on the Web, without compromising the safety of the users,” declared Philippe Le Hégaret, W3C Project Lead, in the official press release.

WebAssembly has been the talk of the town for providing a safe, portable, low-level code format designed for efficient execution and compact representation. According to the W3C consortium, WebAssembly enables the Web platform to perform more efficient execution of computationally intensive algorithms, which in turn makes it practical to deliver whole new classes of user experience on the Web and elsewhere. Because it is a platform-independent execution environment, it can also be used on any other computer platform.

W3C has published three WebAssembly specifications as W3C Recommendations:

- The WebAssembly Core Specification defines a low-level virtual machine that closely mimics the functionality of many microprocessors upon which it is run.
- The WebAssembly Web API defines a Promise-based interface for requesting and executing a .wasm resource.
- The WebAssembly JavaScript Interface provides a JavaScript API for invoking and passing parameters to WebAssembly functions.

W3C is also working on a range of features for future versions of the standard. These include:

- Threading: threads provide the benefits of shared-memory multi-threading and atomic memory accesses.
- Fixed-width SIMD: vector operations that execute loops in parallel.
- Reference types: allow WebAssembly code to directly reference host objects.
- Tail calls: enable calling functions without using extra stack space.
- ECMAScript module integration: interact with JavaScript by loading WebAssembly executables as ES6 modules.

There are many other longer-term projects that W3C is working on, many of them aimed at improving the usability and availability of WebAssembly; examples include garbage collection, debugging interfaces, and the WebAssembly System Interface (WASI).

In other news, Mozilla recently partnered with Fastly, Intel, and Red Hat to form the Bytecode Alliance to build a secure-by-default future for WebAssembly and to take it beyond the browser.

Introducing Woz, a Progressive WebAssembly Application (PWA + WebAssembly) generator written entirely in Rust.
You can now use WebAssembly from .NET with Wasmtime!
4 predictions by Richard Feldman on the future of the web: TypeScript, WebAssembly, and more.
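To give a feel for the WebAssembly JavaScript Interface mentioned above, here is a hedged sketch of loading and calling a module in the browser; the module URL and the add export are hypothetical:

// Compile and instantiate a module while its bytes stream in
async function run () {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('/module.wasm')                 // hypothetical module URL
  )
  // 'add' is assumed to be a function exported by module.wasm
  console.log(instance.exports.add(2, 3)) // -> 5
}

run()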

4 predictions by Richard Feldman on the future of the web: TypeScript, WebAssembly, and more

Bhagyashree R
26 Nov 2019
8 min read
At ReactiveConf 2019, Richard Feldman, author of Elm in Action and creator of elm-css, made four predictions about what the future of web development will look like by the end of 2020 and 2025. ReactiveConf 2019 was a three-day functional programming event that took place from October 30 to November 1 in Prague. The event hosted a number of great talks sharing the latest global trends in web and mobile development. Among the topics covered this year were PWAs, optimizations, security, visualizations, accessibility, and diversity.

Predicting the future of the web is about safer bets than about trends

Feldman started out by asking a question that developers often come across: which technology stack should you choose for your next project? Previously, it was often advised to go for technologies that are “boring” or mature instead of the latest and shiniest ones. Going by this advice, Feldman and his team chose the technology that had the biggest library ecosystem, was the most mature in the LAMP stack, and was adopted by many successful companies: Perl. Since then, however, Perl has gradually lost its popularity.

The lesson that Feldman learned here was that “any technology that we choose, no matter how popular, how mainstream, how much traction it got today, you are still making a bet.” He says that predicting how the future of current technologies will look, and following that prediction, is safer than blindly accepting what everyone else is doing. After setting up the premise, Feldman moved on to sharing his predictions.

Prediction 1: “TypeScript takes over the JS world”

Back in 2012, Anders Hejlsberg, the original designer of C#, Delphi, and Turbo Pascal, came up with another programming language called TypeScript. This language was introduced as a “superset” of JavaScript that helps developers build JavaScript apps that scale. Some of the positives that this language brought to JavaScript development were excellent tooling enabled by static typing, self-documentation of code, continuous feedback from autocomplete, and more.

Since its introduction, TypeScript has seen huge adoption. Almost all the big frontend frameworks, such as React, Angular, and Vue, have extensive TypeScript support. More and more JavaScript developers and framework authors are taking advantage of the excellent tooling and other benefits it provides. Its latest release, TypeScript 3.7, includes much-awaited features like assert signatures, recursive type aliases, top-level await, nullish coalescing, and optional chaining.

Further learning: If you are interested in building with TypeScript and its latest features, check out the book Learn TypeScript 3 by Building Web Applications by Sebastien Dubois and Alexis Georges. The book covers the very basics through to the more advanced concepts, while explaining many design patterns, techniques, frameworks, libraries, and tools along the way. You will learn a ton about modern web frameworks like Angular, Vue.js, and React, and build cool web applications using them. It also covers modern front-end development tooling such as Node.js, npm, yarn, Webpack, Parcel, Jest, and many others.

Despite its popularity, not everyone is using TypeScript. Along with verbose code, it is “unsound” by design and gives a false sense of security in some instances, Feldman shared. So, there are people who like TypeScript and there are people who don't.
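Before moving on to adoption, here is a small sketch of two of the TypeScript 3.7 features mentioned above, optional chaining and nullish coalescing; the User shape is invented for illustration:

interface User {
  profile?: {
    displayName?: string
  }
}

function greet (user: User): string {
  // ?. short-circuits to undefined at the first missing link;
  // ?? falls back only on null/undefined (unlike ||, which also
  // replaces '' and 0)
  const name = user.profile?.displayName ?? 'anonymous'
  return `Hello, ${name}!`
}

console.log(greet({}))                                  // Hello, anonymous!
console.log(greet({ profile: { displayName: 'Ada' } })) // Hello, Ada!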
The most important factor in predicting its future is seeing how it is affecting the teams actually using it. Feldman said, “I hear a lot of teams saying we are trying TypeScript, we have used TypeScript, or we are using TypeScript. I hear almost no teams saying we tried TypeScript and then went back to JavaScript.”

Feldman predicted that by the end of 2020, TypeScript will be the most common choice for new commercial JS projects, and that by the end of 2025, there will be more people writing TypeScript on a daily basis than people writing vanilla JavaScript.

Prediction 2: “WebAssembly is going to expand the web app pie”

First announced in 2015, WebAssembly is Assembly for the browser, with a compact binary format that runs at near-native execution speed. It is also a compilation target for other high-level languages, including C/C++ and Rust. Its “closer to metal” property enables a number of computationally intensive use cases on the web, including games, media editing, speech synthesis, and client-side computer vision, among others.

Start your WebAssembly journey with the book Hands-On Game Development with WebAssembly by Rick Battagline, which introduces web and game devs to the world of WebAssembly by walking through the development of a retro arcade game.

WebAssembly is designed to work alongside JavaScript, which means you can call WebAssembly modules from JavaScript code. Though it can be used to improve the performance of JavaScript apps and libraries, Feldman doubts that this will be the only major way developers use it in the future. This is because the existing performance of JavaScript is generally accepted, and promising some percentage of improvement in speed is not going to be a game-changer for WebAssembly.

Instead, Feldman believes that WebAssembly will enable browsers to compete with app stores and installers. Getting users to install an app can be a significant obstacle to adoption. WebAssembly can help distribute native code without code signing, app stores, and development kits. Also, the web as a delivery platform provides deep linking and other sharing capabilities. He explained this through the example of Figma, a collaborative interface design tool built in C++, which users can access just by going to a URL. However, distributing applications built in Rust, C++, or Go on the web does not mean the end of HTML, CSS, and JavaScript. WebAssembly will simply expand what he calls the “web app pie.”

Feldman predicted that by the end of 2020, WebAssembly will not make much difference to the makeup of the web. By the end of 2025, however, we will start to see a niche of heavyweight web apps that are basically native apps distributed through the browser.

Prediction 3: “npm lasts, surviving further problems”

In recent years, developers have witnessed and survived quite a few npm disasters. In 2016, a developer unpublished more than 250 npm-managed modules, which affected Node, Babel, and thousands of other projects. Then in 2018, we saw the event-stream case, in which an ill-intentioned user took ownership of the widely-used package through social engineering and infected it with a malicious package. Another problem with npm is that it can allow the execution of arbitrary code from thousands and thousands of packages through the “postinstall” hook in package.json.
Feldman recommends disabling “postinstall” and “preinstall” scripts by using the following command:

npm config set ignore-scripts true

We are also seeing some alternatives to npm. Feldman mentioned Entropic, a federated package registry with a new CLI, introduced by the former CTO of npm, C J Silverio. Feldman believes that despite these alternatives, and despite any further financial, security, or other problems, developers will continue to use npm because of its strong network effects.

Drawing from these events, Feldman predicted that by the end of 2020 we can expect one more security incident, and that by the end of 2025 we might see at least one malicious npm package infecting many developers’ machines.

Prediction 4: “JS alternatives stay niche, but age well”

When it comes to JavaScript alternatives, we have two options: JS dialects and non-JS dialects. Some of the JS dialects are TypeScript, Dart, and CoffeeScript, among others. Non-JS dialects include ClojureScript, ReasonML, and Elm, which provide a different experience than writing JavaScript.

Representing the Elm core team at the event, Feldman listed a few reasons why developers should try Elm. It renders faster and generates smaller builds than most top JS frameworks, and it almost never crashes. It has its own package ecosystem and is often praised for its very detailed error messages. After sharing the benefits of Elm, Feldman concluded that JavaScript alternatives will stay niche, but age well. This essentially means that people who have chosen these alternatives and are happy with them will continue to use them regardless of the popularity of TypeScript.

By the end of 2020, compile-to-JS languages will continue to grow, but not as fast as TypeScript. By the end of 2025, non-JavaScript dialects will have aged well, although at that time TypeScript will still be more popular.

Want to add TypeScript to your skill set? Check out the book Learn TypeScript 3 by Building Web Applications by Sebastien Dubois and Alexis Georges, a comprehensive guide that teaches how to wisely use the latest features in TypeScript 3. You will learn how to build web applications with Angular, Vue.js, and React and use modern front-end development tooling such as Node.js, npm, yarn, Webpack, Parcel, Jest, and many others.

Microsoft releases TypeScript 3.7 with much-awaited features like Optional Chaining, Assertion functions and more
Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices
An introduction to TypeScript types for ASP.NET core [Tutorial]
Prediction 4: "JS alternatives stay niche, but age well"

When it comes to JavaScript alternatives, we have two options: JS dialects and non-JS dialects. Some of the JS dialects are TypeScript, Dart, and CoffeeScript, among others, whereas non-JS dialects include ClojureScript, ReasonML, and Elm, which provide a different experience than writing JavaScript.

Representing the Elm core team at the event, Feldman listed a few reasons why developers should try Elm: it renders faster and generates smaller builds than most top JS frameworks, it almost never crashes, it has its own package ecosystem, and it is often praised for its very detailed error messages. After sharing the benefits of Elm, Feldman concluded that JavaScript alternatives will stay niche, but age well. This essentially means that people who have chosen these alternatives and are happy with them will continue to use them regardless of the popularity of TypeScript.

By the end of 2020, compile-to-JS languages will continue to grow, but not as fast as TypeScript. By the end of 2025, non-JavaScript dialects will have aged well, although TypeScript will still be more popular.

Want to add TypeScript to your skillset? Check out our book, Learn TypeScript 3 by Building Web Applications by Sebastien Dubois and Alexis Georges. It is a comprehensive guide that teaches you how to wisely use the latest features in TypeScript 3. You will learn how to build web applications with Angular, Vue.js, and React and use modern front-end development tooling such as Node.js, npm, yarn, Webpack, Parcel, Jest, and many others.

Microsoft releases TypeScript 3.7 with much-awaited features like Optional Chaining, Assertion functions and more
Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices
An introduction to TypeScript types for ASP.NET core [Tutorial]
Expanding Web Assembly beyond the browser with Bytecode Alliance, a Mozilla, Fastly, Intel and Red Hat partnership

Sugandha Lahoti
13 Nov 2019
4 min read
Mozilla has partnered with Fastly, Intel, and Red Hat to form the Bytecode Alliance, which aims to build a secure-by-default future for WebAssembly and to take it beyond the browser. "WebAssembly is changing the web, but we believe WebAssembly can play an even bigger role in the software ecosystem as it continues to expand beyond browsers," explained Luke Wagner, Distinguished Engineer at Mozilla and co-creator of WebAssembly. "This is a unique moment in time at the dawn of a new technology, where we have the opportunity to fix what's broken and build new, secure-by-default foundations for native development that are portable and scalable. But we need to take deliberate, cross-industry action to ensure this happens in the right way."

What is the Bytecode Alliance

This year, we saw many initiatives by Mozilla pushing WebAssembly beyond the web. The Bytecode Alliance is another step in that direction, with the aim of making it safe for developers to use untrusted code no matter where they're running it, whether in the cloud, on the desktop, or on IoT devices. This common, reusable set of foundations can be used on its own or embedded in other libraries and applications. As developers run untrusted code in many new places, from the cloud to IoT devices, they open up security and portability challenges. With WebAssembly and emerging related standards such as WASI and WebAssembly Interface Types, the Alliance plans to address the challenges that occur when you try to run the same code across different systems (server, edge, browser, mobile, and more).

The Bytecode Alliance will combine Wasmtime, a runtime for WebAssembly and WASI; Fastly's Lucet, a WebAssembly compiler and runtime; Intel's WebAssembly Micro Runtime; and the Cranelift code generator. It will also include cargo-wasi, a lightweight Cargo subcommand that compiles Rust code to target WebAssembly, as well as wat and wasmparser. This set of projects is likely to expand as the Alliance grows. Once the Alliance is formalized, an open governance model consistent with open-source best practices and strong community norms will be established. Individual developers can also participate in an open-source project in the Bytecode Alliance. Each project is governed by its own committer group, and developers who are very active in shaping a project are eligible for nomination to it.

Introducing WebAssembly nanoprocesses

Apart from these existing projects, the Alliance is also focusing on a new architectural pattern emerging in WebAssembly. This pattern, called the "WebAssembly nanoprocess," gives you most of the benefits of a process (the tool that operating systems use to protect programs from each other), but with much less overhead and much faster communication between (nano)processes. Mozilla thinks this pattern can make massively modular code (like what you see on npm, crates, and PyPI) secure by default, meaning developers won't need to worry as much about vulnerabilities in dependencies, or about attackers sneaking malicious code into their codebases. Nanoprocesses can also provide memory isolation that fits anywhere: they can handle requests for tens of thousands of customers on the same machine, and they can provide software fault isolation for individual libraries in native applications. The blog mentions, "With wasm, we can replace microservices with nanoprocesses and get the same security and language independence benefits. It gives us the composability of microservices without the weight."
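As a rough sketch of how the tooling mentioned above fits together, the following commands compile a Rust crate to the wasm32-wasi target with cargo-wasi and run the result in Wasmtime. The crate name my_crate is hypothetical, and the exact output path depends on your project and build profile:

# Install the Cargo subcommand once.
cargo install cargo-wasi

# Compile the current crate to the wasm32-wasi target.
cargo wasi build

# Run the resulting module in the Wasmtime runtime.
# ("my_crate" is a hypothetical crate name.)
wasmtime target/wasm32-wasi/debug/my_crate.wasm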
The alliance was well received by developers, who posted their reactions on Twitter:

https://twitter.com/tschneidereit/status/1194382954858000384
https://twitter.com/curioman2/status/1194475559012556800
https://twitter.com/fabianfranz/status/1194408360608698370

Learn more about this alliance on Mozilla's blog.

Introducing Woz, a Progressive WebAssembly Application (PWA + Web Assembly) generator written entirely in Rust
Wasmer's first Postgres extension to run WebAssembly is here!
Mozilla proposes WebAssembly Interface Types to enable language interoperability
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module
React Conf 2019: Concurrent Mode preview out, CSS-in-JS, React docs in 40 languages, and more

Bhagyashree R
29 Oct 2019
9 min read
React Conf 2019 wrapped up last week. It kicked off with a keynote by Tom Occhino and Yuzi Zheng from the React team, who talked about Concurrent Mode and Suspense. They were followed by Frank Yan, also from the React team, who explained how they are building the "new Facebook" with React and Relay. One of the major highlights of his talk was the CSS-in-JS library that will be open-sourced once ready. Sophie Alpert, former manager of the React team, gave a talk on building a custom React renderer, implementing a small version of ReactDOM in just 30 minutes to demonstrate it. There were many other lightning talks and presentations on translated React docs, building inclusive apps by improving their accessibility, and much more. React Conf 2019 was a two-day event that took place from Oct 24-25 at Lake Las Vegas, Nevada. The conference brought together front-end and full-stack developers to "share knowledge, skills, to network, and just to have fun."

React's long-term goal: "Making it easier to build great user experiences"

Tom Occhino, Engineering Director of the React group, took to the stage to talk about the goals for React and the community. He said that React's long-term goal is to make it easier for developers to build great user experiences. "Easier to build" means improving the developer experience, and the three factors that contribute to a great developer experience are a low barrier to entry, developer productivity, and the ability to scale. React is constantly working towards improving the developer experience by introducing new features; two such features are Concurrent Mode and Suspense.

Concurrent Mode

Concurrent Mode is a set of features that make React apps more responsive by rendering component trees without blocking the main thread. It gives React the ability to interrupt big blocks of low-priority work in order to focus on higher-priority work like responding to user input. This enables React to work on several state updates concurrently and to remove jarring, too-frequent DOM updates. The team also released the first early community preview of Concurrent Mode last week. https://twitter.com/reactjs/status/1187411505001746432

Suspense

Suspense was introduced as an improvement to the developer experience when dealing with asynchronous data fetching within React apps. It suspends your component's rendering and shows a fallback until some condition is met. Occhino describes Suspense as a "React system for orchestrating asynchronous loading of code, data, and resources." He adds, "Suspense lets the component wait for something before they render. This helps consolidate nested dependencies and nested spinners and things behind the single simple loading experience."

Towards the end of his keynote, Occhino also touched upon how the team plans to make the React community more inclusive and diverse. He said, "Over the past 10 years, I have learned that diverse teams build better products and make better decisions. Everyone working on React shares my conviction about this." He added, "Up until recently we have taken a pretty passive stance to building and shaping the React community. We have a responsibility to you all and I feel like we let many of you down. We are committed to doing better!" As a first step, the team has now replaced the React code of conduct with the Contributor Covenant.
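To give a sense of the Suspense API described above, here is a minimal sketch of Suspense used for code splitting, the use case that is stable today. The ProfilePage component and its module path are hypothetical:

import React, { Suspense, lazy } from "react";

// React.lazy defers loading the component's code until it first renders.
// "./ProfilePage" is a hypothetical module.
const ProfilePage = lazy(() => import("./ProfilePage"));

export default function App() {
  return (
    // While the lazy chunk is loading, React renders the fallback
    // instead of leaving the screen empty.
    <Suspense fallback={<p>Loading profile…</p>}>
      <ProfilePage />
    </Suspense>
  );
}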
Read also: #Reactgate forces React leaders to confront community's toxic culture head on

What the React team is working on

Yuzi Zheng, Engineering Manager for the React and Relay teams at Facebook, gave an insight into the projects the core teams are working on. She started off by giving a recap of Hooks, one of the most-awaited React features announced at React Conf 2018. "Hooks are designed for the future of React in the way that it naturally encourages code that is compatible with all the plumbing features such as accessibility, server-side rendering, suspense, and concurrent mode. Since its release, the reception of Hooks has been really positive," she shared. If you want to understand the fundamentals of React Hooks and use them for implementing responsive design and more, check out our book, Learning Hooks.

Another long-term project the team is focusing on is providing developers with a way to easily build accessibility features in React. Currently, developers can create accessible websites using standard HTML techniques, but that approach has some limitations. To help build accessibility directly into React, the team is working on two areas: managing focus and input interfaces. For managing focus, the team plans to add primitives that provide "a more structured way of making sure component flows well" for cases like React portals and Suspense fallbacks, and that are accessible by default. For input interfaces, they plan to add support for rich gestures that work across platforms and are accessible by default.

The team is also focusing on improving initial render times. Server-side rendering helps reduce the amount of CPU usage on the client for the initial render to some extent, but it has limitations. To address them, the team plans to add built-in support for server-side rendering that works with lazily loaded components to reduce the bytes needed on the client, supports streaming markup down in chunks, and is fully compatible with Concurrent Mode and Suspense.

The CSS-in-JS library

Frank Yan, Engineering Manager in the React group at Facebook, talked about how the team has rebuilt and redesigned the Facebook website and the key lessons they learned along the way. The new Facebook website is a single-page app, with React organizing the HTML and JavaScript into components from the top down and with GraphQL and Relay colocating the queries declaratively in the components. The one key part the team did not reorganize was CSS. Instead, they created a new library to embed styles in components, following the CSS-in-JS approach. It aims to make styles easier to read, understand, and update; its syntax is inspired by React Native and other frameworks, and since it lets you embed styles inside JavaScript files, you can also use JavaScript tooling like type checkers and linters.
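As a quick refresher on what the Hooks mentioned above look like in practice, here is a minimal sketch using the two most common ones; the WindowWidth component is a hypothetical example:

import React, { useState, useEffect } from "react";

export function WindowWidth() {
  // useState declares local component state without a class.
  const [width, setWidth] = useState(window.innerWidth);

  // useEffect handles side effects; the returned function is the cleanup,
  // and the empty dependency array means "subscribe once, on mount".
  useEffect(() => {
    const onResize = () => setWidth(window.innerWidth);
    window.addEventListener("resize", onResize);
    return () => window.removeEventListener("resize", onResize);
  }, []);

  return <p>Window width: {width}px</p>;
}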
React docs translated into 40 languages

Nat Alison is a freelance front-end developer who helped the React team coordinate translations of reactjs.org into 40 languages. She explained why and how they were able to translate the docs for this massively popular library: "More than 80% of the world's population does not know English. If we restrict React, one of the most popular JavaScript frameworks, we restrict who gets to create and shape the web." Providing officially translated docs will make it easier for many non-English-speaking React developers to understand and use React in their projects.

It will also prevent users from creating unofficial translations, which can be incorrect, outdated, or difficult to find. Initially, the team thought of integrating a SaaS platform that allows users to submit translations, but this was not a feasible solution. They then looked at the approach used by Vue, which maintains separate repositories for each language, forked from the original repo. Like Vue, the team created a bot that periodically checks for changes in the English repo and submits pull requests whenever there is a change. If you want to contribute to translating the React docs into your language, check out the IsReactTranslatedYet website.

Developing accessible apps

Brittany Feenstra, a developer at Formidable, took to the stage to talk about why accessibility is important and how you can approach it. Accessibility, or a11y, means making your apps and websites usable for everyone, including people with disabilities. There are four types of disabilities that developers need to design for: visual, auditory, motor, and cognitive. Feenstra mentioned that though we are all aware of the importance of accessibility, we often "end up saving it for later" because of tight deadlines. Feenstra, however, compared accessibility to a marathon: it is not something you can achieve in one sprint; you should instead look at it as the training program you follow when preparing for one. If we take a step-by-step approach to making an accessible app, "we will be way less fatigued and well-equipped," she added.

Sharing some starting tips, she said that we need to focus on three areas. First, learn to run, or, in the accessibility context, understand HTML semantics, then explore reference patterns, navigation, and focus traps. Second, improve nutritional habits, or, in the accessibility context, use environments and tools that help us write sturdier code. She recommends axe, an accessibility checker for WCAG 2 and Section 508, as well as tools that simulate how people with visual impairments see your UI, such as NoCoffee and I want to see like the colour blind. She also emphasized linting and testing your code for accessibility with eslint-plugin-jsx-a11y and automated accessibility assessment tools. Third, cross-train and stretch, or, in the accessibility context, learn to "interact with the UI in ways that let us understand the update we are making to our code."

"React is Fiction"

This was a talk by Jenn Creighton, a Frontend Architect at The Wing, who comes from a creative writing background. "Writing React to me felt like coming home. It was really familiar in a way that I could not pinpoint," she said. Then she realized that writing React reminded her of fiction, and merging the two disciplines helped her write better components. Creighton drew similarities between developing in React and creative writing. One of the key principles of creative writing is "Show, don't tell," which advises authors to describe a situation instead of just stating it, engaging readers by letting them picture it in their heads. According to Creighton, React has a similar principle: "Declarative, not imperative." React is declarative, which allows developers to describe what the final state should be instead of listing all the steps to reach that state. There were many other exciting talks about progressive web animations, building React-Select, and more.
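To make the declarative-versus-imperative contrast concrete, here is a minimal hypothetical sketch of the same UI written both ways; the element id and component are invented for illustration:

// Imperative: spell out each DOM mutation yourself.
function renderCountImperative(count: number) {
  const el = document.getElementById("count"); // hypothetical element id
  if (el) {
    el.textContent = `Clicked ${count} times`;
  }
}

// Declarative (React): describe what the UI looks like for any state;
// React figures out which DOM updates are needed to get there.
function Counter({ count }: { count: number }) {
  return <p>Clicked {count} times</p>;
}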
Check out the live streams to watch the full talks:

Day 1: https://www.youtube.com/watch?v=RCiccdQObpo
Day 2: https://www.youtube.com/watch?v=JDDxR1a15Yo&t=2376s

Ionic React released; Ionic Framework pivots from Angular to a native React version
ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more
React Native 0.61 introduces Fast Refresh for reliable hot reloading