
How-To Tutorials

Github Sponsors: Could corporate strategy eat FOSS culture for dinner?

Sugandha Lahoti
24 May 2019
4 min read
Yesterday, at the GitHub Satellite 2019 event, GitHub launched probably its most game-changing yet debatable feature: Sponsors. GitHub Sponsors works much like Patreon, in the sense that developers can sponsor the efforts of a contributor "seamlessly through their GitHub profiles". Developers will be able to opt into having a “Sponsor me” button on their GitHub repositories and open-source projects, where they can highlight their funding models. GitHub shared that it will cover payment processing fees for the first 12 months of the program to celebrate the launch. “100% percent of your sponsorship goes to the developer," GitHub wrote in an announcement. At launch, this feature is marked as "wait list" and is currently in beta.

To start off this program, the code hosting site has also launched the GitHub Sponsors Matching Fund, which means it will match all contributions up to $5,000 during a developer’s first year in GitHub Sponsors.

GitHub Sponsors could prove beneficial for developers working on open source software that isn't profitable. They can now raise money directly through GitHub, the leading host for open-source software. More importantly, GitHub Sponsors is not limited to software developers but open to all open-source contributors, including those who write documentation, provide leadership, or mentor new developers. This, along with the promise of zero fees to use the program, has got people excited.

https://twitter.com/rauchg/status/1131807348820008960
https://twitter.com/EricaJoy/status/1131640959886741504

On the flip side, GitHub Sponsors could also dilute the essence of what open source is by financially influencing what developers work on. It may drive open-source developers to focus on projects that are more likely to attract financial contributions over projects that are more interesting and challenging but unlikely to find financial backers on GitHub. This could hurt FOSS contributions as people start to expect to be paid rather than contributing for intrinsic motivations. It could, in turn, lead to toxic politics among project contributors regarding who gets credit and who gets paid. Companies could also use GitHub sponsorships to judge the health of open source projects.

People are also speculating that this could be Microsoft’s (GitHub’s parent company) strategy to centralize and enclose open source community dynamics, as well as benefit from its monetization. Some are also wondering about the possible effects of monetization on OSS, which could lead to mega corporations profiteering off free labor, thus changing the original vision of an open source community.

https://twitter.com/andrestaltz/status/1131521807876591616

Andre Staltz also made an interesting point about the potential of the zero-fee model driving other open source payment models out of existence. He believes that once Microsoft’s dominance is achieved, GitHub's commissions could go up.

https://twitter.com/andrestaltz/status/1131526433027837952

A Hacker News user also conjectured that this may give Microsoft access to data on top-notch developers: “Will this mean that Microsoft gets a bunch of PII on top-notch developers (have to enter name + address info to receive or send payments), and get much more value from that data than I can imagine?”

At present, GitHub is offering this feature as an invite-only beta with a waitlist. It will be interesting to see if and how this changes the dynamics of open source collaboration once it rolls out fully. A tweet observes: “I think it bears repeating that the path to FOSS sustainability is not individuals funding projects. We will only reach sustainability when the companies making profit off our work are returning value to the Commons.”

Read our full coverage on GitHub Satellite here. To know more about GitHub Sponsors, visit the official blog.

GitHub Satellite 2019 focuses on community, security, and enterprise
GitHub announces beta version of GitHub Package Registry, its new package management service
GitHub deprecates and then restores Network Graph after GitHub users share their disapproval

PostgreSQL 12 Beta 1 released

Fatema Patrawala
24 May 2019
6 min read
The PostgreSQL Global Development Group yesterday announced the first beta release of PostgreSQL 12, which is now available for download. This release contains previews of all features that will be available in the final release of PostgreSQL 12, though some details could still change.

PostgreSQL 12 feature highlights

Indexing Performance, Functionality, and Management
PostgreSQL 12 improves the overall performance of standard B-tree indexes, with better space management for these indexes as well. These improvements also provide a reduction of index size for B-tree indexes that are frequently modified, in addition to a performance gain. Additionally, PostgreSQL 12 adds the ability to rebuild indexes concurrently, which lets you perform a REINDEX operation without blocking any writes to the index. This feature should help with lengthy index rebuilds that could otherwise cause downtime when managing a PostgreSQL database in a production environment.

PostgreSQL 12 also extends the abilities of several of the specialized indexing mechanisms. The ability to create covering indexes, i.e. the INCLUDE clause introduced in PostgreSQL 11, has now been added to GiST indexes. SP-GiST indexes now support K-nearest neighbor (K-NN) queries for data types that support the distance (<->) operation. The amount of write-ahead log (WAL) overhead generated when creating a GiST, GIN, or SP-GiST index is also significantly reduced in PostgreSQL 12, which benefits the disk utilization of a PostgreSQL cluster and features such as continuous archiving and streaming replication.

Inlined WITH queries (Common table expressions)
Common table expressions (WITH queries) can now be automatically inlined in a query if they (a) are not recursive, (b) do not have any side effects, and (c) are referenced only once in a later part of the query. This removes an "optimization fence" that has existed since the introduction of the WITH clause in PostgreSQL 8.4.

Partitioning
PostgreSQL 12 improves performance when processing tables with thousands of partitions for operations that only need to touch a small number of partitions. This release also improves the performance of both INSERT and COPY into a partitioned table, and ATTACH PARTITION can now be performed without blocking concurrent queries on the partitioned table. Additionally, foreign keys can now reference partitioned tables.

JSON path queries per SQL/JSON specification
PostgreSQL 12 now allows execution of JSON path queries per the SQL/JSON specification in the SQL:2016 standard. Similar to XPath expressions for XML, JSON path expressions let you evaluate a variety of arithmetic expressions and functions, in addition to comparing values within JSON documents. A subset of these expressions can be accelerated with GIN indexes, allowing highly performant lookups across sets of JSON data.

Collations
PostgreSQL 12 now supports case-insensitive and accent-insensitive comparisons for ICU-provided collations, also known as "nondeterministic collations". These collations can be convenient for comparisons and sorts, but can also carry a performance penalty, as a collation may need to make additional checks on a string.

Most-common Value Extended Statistics
CREATE STATISTICS, introduced in PostgreSQL 10 to collect more complex statistics over multiple columns and improve query planning, now supports most-common value (MCV) statistics.
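A rough SQL sketch of a few of the features described above; the table, column, and index names here are hypothetical and not taken from the release announcement:

```sql
-- Rebuild an index without blocking writes (new in PostgreSQL 12).
REINDEX INDEX CONCURRENTLY orders_customer_id_idx;

-- WITH queries are now inlined automatically when they are non-recursive,
-- side-effect free, and referenced once; AS MATERIALIZED restores the old
-- "optimization fence" behavior when you still want it.
WITH recent AS MATERIALIZED (
    SELECT * FROM orders WHERE created_at > now() - interval '7 days'
)
SELECT count(*) FROM recent;

-- SQL/JSON path query over a jsonb column.
SELECT jsonb_path_query(payload, '$.items[*] ? (@.price > 100)')
FROM events;

-- Multi-column most-common-value statistics for better selectivity estimates.
CREATE STATISTICS orders_state_city_mcv (mcv) ON state, city FROM orders;
ANALYZE orders;
```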
These extended statistics lead to improved query plans for non-uniform data distributions.

Generated Columns
PostgreSQL 12 allows the creation of generated columns that compute their values with an expression using the contents of other columns. This feature provides stored generated columns, which are computed on inserts and updates and are saved on disk. Virtual generated columns, which are computed only when a column is read as part of a query, are not implemented yet.

Pluggable Table Storage Interface
PostgreSQL 12 introduces a pluggable table storage interface that allows the creation and use of different methods for table storage. New access methods can be added to a PostgreSQL cluster using the CREATE ACCESS METHOD command and subsequently applied to tables with the new USING clause on CREATE TABLE. A table storage interface can be defined by creating a new table access method. In PostgreSQL 12, the default storage interface is the heap access method, which is currently the only built-in method. (A short SQL sketch of the generated column and table access method syntax appears at the end of this section.)

Page Checksums
The pg_verify_checksums command has been renamed to pg_checksums and now supports enabling and disabling page checksums across a PostgreSQL cluster that is offline. Previously, page checksums could only be enabled during the initialization of a cluster with initdb.

Authentication & Connection Security
GSSAPI now supports client-side and server-side encryption and can be specified in the pg_hba.conf file using the hostgssenc and hostnogssenc record types. PostgreSQL 12 also allows discovery of LDAP servers based on DNS SRV records if PostgreSQL was compiled with OpenLDAP.

A few noted behavior changes in PostgreSQL 12
Several changes introduced in PostgreSQL 12 can affect the behavior as well as the management of your ongoing operations. A few of these are noted below; for other changes, visit the "Migrating to Version 12" section of the release notes.

The recovery.conf configuration file is now merged into the main postgresql.conf file, and PostgreSQL will not start if it detects that recovery.conf is present. To put PostgreSQL into a non-primary mode, you can use the recovery.signal and standby.signal files. You can read more about archive recovery here: https://www.postgresql.org/docs/devel/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY

Just-in-Time (JIT) compilation is now enabled by default.

OIDs can no longer be added to user-created tables using the WITH OIDS clause. Operations on tables that have columns created using WITH OIDS (i.e. columns named "OID") will need to be adjusted. Running SELECT * on a system table will now also output the OID for the rows of the system table, instead of the old behavior which required the OID column to be specified explicitly.

Testing for Bugs & Compatibility
The stability of each PostgreSQL release greatly depends on the community testing the upcoming version with their workloads and testing tools, in order to find bugs and regressions before the general availability of PostgreSQL 12. As this is a beta, minor changes to database behaviors, feature details, and APIs are still possible. The PostgreSQL team encourages the community to test the new features of PostgreSQL 12 in their database systems to help eliminate any bugs or other issues that may exist. A list of open issues is publicly available in the PostgreSQL wiki, and you can report bugs using the form on the PostgreSQL website.
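As referenced above, here is a minimal sketch of the generated column and table access method syntax; the table and column names are invented for illustration:

```sql
-- A stored generated column: computed on INSERT/UPDATE and saved on disk.
CREATE TABLE order_lines (
    quantity   integer NOT NULL,
    unit_price numeric NOT NULL,
    line_total numeric GENERATED ALWAYS AS (quantity * unit_price) STORED
);

-- Choosing a table access method explicitly; heap is currently the only
-- built-in method, but extensions can register others via CREATE ACCESS METHOD.
CREATE TABLE archived_events (
    id      bigint,
    payload jsonb
) USING heap;
```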
Beta Schedule
This is the first beta release of version 12. The PostgreSQL Project will release additional betas as required for testing, followed by one or more release candidates, until the final release in late 2019. For further information, please see the Beta Testing page. Many other new features and improvements have been added to PostgreSQL 12; please see the Release Notes for a complete list of new and changed features.

PostgreSQL 12 progress update
Building a scalable PostgreSQL solution
PostgreSQL security: a quick look at authentication best practices [Tutorial]

5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]

Richard Gall
24 May 2019
6 min read
If you’re a Node.js developer, it might seem odd to be thinking about Azure. However, as the software landscape becomes increasingly cloud native, it’s well worth thinking about the cloud solution you and your organization use. It should, after all, make life easier for you as much as it should help your company scale and provide better services and user experiences for customers. We don’t often talk about it, but cloud isn’t one thing: it’s a set of tools that provides developers with new ways of building and managing apps. It helps you experiment and learn.

In the development of Azure, developer experience is at the top of the agenda. In many ways the platform represents Microsoft’s transformation as an organization, from one that seemed to distrust open source developers to one that is hell-bent on making them happier and more productive. So, if you’re a Node.js developer - or any kind of JavaScript developer, for that matter - reading this with healthy scepticism (and why wouldn’t you?), let’s look at some of the reasons and ways Azure can support you in your work...

This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free courtesy of Microsoft here.

Deploy apps quickly with Azure App Service
As a developer, deploying applications quickly is one of your top priorities, and Azure supports that through Azure App Service. Essentially, Azure App Service is a PaaS that brings together a variety of other Azure services and resources, helping you develop and host applications without worrying about your infrastructure. There are lots of reasons to like Azure App Service, not least the speed with which it allows you to get up and running. Most importantly, it gives application developers access to a range of Azure features, such as load balancing and security, as well as the platform's integrations with tools for DevOps processes. Azure App Service works for developers working with a range of platforms, from Python to PHP - Node.js developers that want to give it a try should start here.

Manage application and infrastructure resources with the Azure CLI
The Azure CLI is a useful tool for managing cloud resources, and it can also be used to deploy an application quickly. If you’re a developer that likes working with the CLI, this really does offer a nice way of working, allowing you to easily move between each step in the development and deployment process. If you want to try deploying a Node.js application using the Azure CLI, check out this tutorial, or learn more about the Azure CLI here.

Go serverless with Azure Functions
Serverless has been getting serious attention over the last 18 months. While it’s true that serverless is a hyped field, and that in reality there are serious considerations to be made about how and where you choose to run your software, it’s relatively easy to try it out for yourself using Azure. In fact, the name itself is useful in demystifying serverless: the word ‘functions’ is a much more accurate description of what you’re doing as a developer. A function is essentially a small piece of code that runs in the cloud and executes certain actions or tasks in specific situations. There are many reasons to go serverless, from a pay-per-use pricing model to support for your preferred dependencies. And while there are plenty of options in terms of cloud providers, Azure is worth exploring because it makes it so easy for developers to leverage. Learn more about Azure Functions here.

Simple, accessible dashboards for logging and monitoring
In 2019, building more reliable and observable systems will expand beyond the preserve of SREs and become something developers are accountable for too. This is the next step in the evolution of software engineering, as new silos are broken down. It’s for this reason that the monitoring tools offered by Azure could prove so valuable for developers. With Application Insights and Azure Monitor, you can gain the level of transparency you need to properly manage your application. Learn how to successfully monitor a Node.js app here.

Build and deploy applications with Azure DevOps
DevOps shouldn’t really require additional effort and thinking - but more often than not it does. Azure is a platform that appears to understand this implicitly, and the team behind it has done a lot to make it easier to cultivate a DevOps culture, with several useful tools and integrations. Azure Test Plans is a toolkit for testing applications, which can seriously improve the way you test in your development processes, while Azure Boards can support project management from inside the Azure ecosystem - useful if you’re looking for a new way to manage agile workflows. But perhaps the most important feature within Azure DevOps - for developers at least - is Azure Pipelines. Azure Pipelines is particularly useful for JavaScript developers, as it gives you the option to run a build pipeline on a Microsoft-hosted agent that has a wide range of common JavaScript tools (like Yarn and Gulp) pre-installed. Microsoft claims this is the “simplest way to build and deploy” because “maintenance and upgrades are taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded after one use.” Find out how to build, test, and deploy Node.js apps using Azure Pipelines with this tutorial.

Read next: 5 developers explain why they use Visual Studio Code

Conclusion: Azure is a place to experiment and learn
We often talk about cloud as a solution or service. And although it can provide solutions to many urgent problems, it’s worth remembering that cloud is really a set of many different tools; it isn’t one thing. Because of this, cloud platforms like Azure are as much places to experiment and try out new ways of working as they are simply someone else’s server space. With that in mind, it could be worth experimenting with Azure to try out new ideas - after all, what’s the worst that can happen? More than anything, cloud native should make development fun. Find out how to get started with Node.js on Azure. Download Learning Node.js with Azure for free from Microsoft.

GitHub Satellite 2019 focuses on community, security, and enterprise

Bhagyashree R
24 May 2019
6 min read
Yesterday, GitHub hosted its annual product and user conference, GitHub Satellite 2019, in Berlin, Germany. Along with introducing a bunch of tools for better reliability and collaboration, this year GitHub also announced a new platform for funding contributors to a project. The announcements were focused on three areas: community, security, and enterprise. Here are some of the key takeaways from the event.

Community: Financial support for open source developers
GitHub has launched a new feature called GitHub Sponsors, which allows any developer to sponsor the efforts of a contributor "seamlessly through their GitHub profiles". At launch, this feature is marked as "wait list" and is currently in beta. GitHub shared that it will not charge any fees for using this feature and will also cover the payment processing fees for the first year of the program. "We’ll also cover payment processing fees for the first 12 months of the program to celebrate the launch. 100% percent of your sponsorship goes to the developer," GitHub wrote in an announcement. To start off this program, the code hosting site has also launched the GitHub Sponsors Matching Fund, which means it will match all contributions up to $5,000 during a developer’s first year in GitHub Sponsors. It would be an understatement to say that this was one of the biggest announcements at the GitHub Satellite event.

https://twitter.com/EricaJoy/status/1131640959886741504
https://twitter.com/patrickc/status/1131556816721080324

GitHub also announced Tidelift as a launch partner, with over 4,000 open source projects on GitHub eligible for income from Tidelift through GitHub Sponsors. In a blog post, Tidelift wrote, “Over the past year, we’ve seen the rapid rise of a broad-based movement to pay open source maintainers for the value they create. The attention that GitHub brings to this effort should only accelerate this momentum. And it just makes sense—paying the maintainers for the value they create, we ensure the vitality of the software at the heart of our digital society.” Read the official blog on GitHub Sponsors for more information.

Security: “It’s more important than ever that every developer becomes a security developer.”
The open source community is driven by a culture of collaboration and trust. Nearly every application built today has some dependence on open source software. This is its biggest advantage, as it saves you from reinventing the wheel. But what if someone in this dependency chain misuses that trust and leaks malware into your application? Sounds like a nightmare, right? To address this, GitHub announced a myriad of security features at GitHub Satellite that will make it easier for developers to ensure code safety.

Broadened security vulnerability alerts
So far, security vulnerability alerts were shown for projects written in .NET, Java, JavaScript, Python, and Ruby. GitHub, together with WhiteSource, has now expanded this feature to detect potential security vulnerabilities in open source projects written in other languages as well. WhiteSource is an open source security and license compliance management platform, which has developed an “Open Source Software Scanning” tool that scans the open source components of your project. The alerts will also be more detailed, enabling developers to assess and mitigate the vulnerabilities.

Dependency insights
Through dependency insights, developers will be able to quickly view vulnerabilities, licenses, and other important information for the open source projects their organization depends on. This will come in handy when auditing dependencies and their exposure when a security vulnerability is disclosed publicly. The feature leverages the dependency graph, giving enterprises full visibility into their dependencies, including details on security vulnerabilities and open source licenses.

Token scanning
GitHub announced the general availability of token scanning at GitHub Satellite, a feature that scans public repositories for known token formats to prevent fraudulent use of credentials that were committed accidentally. It now supports more token formats, including Alibaba Cloud, Mailgun, and Twilio.

Automated security fixes with Dependabot
To make it easier for developers to update their project's dependencies, GitHub now comes integrated with Dependabot, as announced at GitHub Satellite. This allows GitHub to check your dependencies for known security vulnerabilities and then automatically open pull requests to update them to the minimum possible secure version. These automated pull requests contain information about the vulnerability, like release notes, changelog entries, and commit details.

Maintainer security advisories (beta)
GitHub now provides open source maintainers a private workspace where they can discuss, fix, and publish security advisories. You can find the security advisories in your dependencies using the "Security" tab in the GitHub interface. More GitHub security updates announced at GitHub Satellite are available here.

Enterprise: Becoming an "open source enterprise"
The growing collaboration between enterprises and the open source community has enabled innovation at scale. To make this collaboration even easier, GitHub has introduced several improvements to its Enterprise offering at GitHub Satellite:
Enterprise accounts connect organizations to collaborate and build inner source workflows, and the new admin center meets security and compliance needs with global administration and policy enforcement.
Two new user roles, Triage and Maintain, allow enterprise teams to secure and address their access control needs. Administrators can now recruit help, like triaging issues or managing users, from trusted contributors without also granting the ability to write to the repository or change repository settings.
Enterprises can now add groups from their identity provider to a team within GitHub and automatically keep membership synchronized.
Enterprises can create internal repositories that are visible only to their developers, helping them reuse code and build communities within their company.
GitHub Enterprise Cloud administrators can access audit log events using the GraphQL API to analyze data on user access, team creation, and more.
Enterprises can create a draft pull request to ask for input, get feedback on an approach, and refine work before it’s ready for review.
Customers will also be protected in their use of GitHub from claims alleging that GitHub products or services infringe third-party IP rights.

Learn more about the GitHub Enterprise offering here. These are the major updates; for detailed coverage, we recommend you watch the complete GitHub Satellite event that was live-streamed yesterday. Next up for GitHub is the GitHub Universe conference, taking place November 13-14 in San Francisco.

GitHub announces beta version of GitHub Package Registry, its new package management service
GitHub deprecates and then restores Network Graph after GitHub users share their disapproval
Apache Software Foundation finally joins the GitHub open source community

Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting

Fatema Patrawala
23 May 2019
8 min read
At Amazon's annual shareholder meeting on Wednesday, shareholders voted down a proposal from more than 7,500 Amazon employees on climate justice. In the proposal, employees demanded that Jeff Bezos create a comprehensive climate-change plan for the company. Last week, employees also received support from two of the largest proxy advisors to institutional investors, which agreed that shareholders should vote yes on the proposal. But yesterday these efforts faced a major setback when the shareholders did not pass the proposal.

Amazon’s annual proxy statement included 11 resolutions, covering Amazon’s controversial facial recognition technology, demands for more action on climate change, salary transparency, and other equity issues. According to reports from GeekWire, all 11 resolutions were voted down by the shareholders. Amazon did not release shareholder vote totals yesterday but said the information would be filed with the U.S. Securities and Exchange Commission later this week. The 11 resolutions presented at the meeting were:

Food waste: Shareholders want Amazon to issue an annual report on the environmental and social impacts of food waste generated by the company. As part of the report, they’re asking Amazon to study the feasibility of setting new goals for reducing food waste and working toward them.
Special shareholder meetings: The shareholders behind this resolution wanted to amend Amazon’s bylaws to make it easier to call special shareowner meetings. Specifically, they want to give shareholders with an aggregate of 20 percent of the company’s outstanding stock that authority. Amazon currently only allows shareholders with 30 percent of company shares to call a special meeting, according to the resolution.
Facial recognition: This resolution would prevent Amazon from selling its controversial facial recognition technology to government agencies without board approval. A separate resolution asks Amazon’s board to commission an independent study of the potential threats to civil liberties that the technology poses.
Hate speech: Investors want Amazon to issue a report on its efforts to address products in its marketplace that promote hate speech and violence.
Independent board chair: Shareholders asked the board to appoint an independent chair to replace Jeff Bezos, who serves as both chair and CEO. “We believe the combination of these two roles in a single person weakens a corporation’s governance, which can harm shareholder value,” the resolution says.
Sexual harassment: This resolution asked Amazon management to review the company’s sexual harassment policies to assess whether new standards should be implemented.
Climate change: Shareholders asked Amazon to prepare a report describing its plan to reduce fossil fuel dependence and prepare for disruptions caused by the climate crisis. The resolution garnered support from more than 7,600 Amazon employees.
Board diversity: Under this resolution, Amazon’s board of directors would disclose their own “skills, ideological perspectives, and experience” and describe the minimum requirements for new board nominees. The goal is to ensure the board represents a diverse set of ideas and backgrounds.
Pay equity: Shareholders want Amazon to report on the company’s global median gender pay gap. “A report adequate for investors to assess company strategy and performance would include the percentage global median pay gap between male and female employees across race and ethnicity, including base, bonus and equity compensation,” the resolution says.
Executive compensation: This proposal asked the board to study whether it would be feasible to use environmental and social responsibility metrics when determining compensation for senior executives.
Vote-counting: Shareholders asked Amazon’s board of directors to change corporate governance rules so that all resolutions are decided by a simple majority vote.

About 50 members of the group Amazon Employees for Climate Justice attended the event, representing the staffers who signed a letter calling for a climate policy. Employees put forth a proposal at the meeting requesting Amazon’s board of directors to accept it and take action, but the board advised shareholders to vote against it, and as a result the proposal was not passed by the shareholders. Inside the meeting, a group of Amazon employees asked the company to take action on climate change; they stood together in white shirts in support of the resolution.

https://twitter.com/AMZNforClimate/status/1131253997933719552

After the proposal failed to pass, employees attempted to address Amazon CEO Jeff Bezos directly during the Q&A session. Emily Cunningham, an Amazon UX designer and an organizer of the climate change initiative group, asked Bezos to come on stage to hear the proposal before making her case. Bezos chose not to appear at that point; instead, David Zapolsky, Amazon’s General Counsel, responded to Emily. “Without bold, rapid action, we will lose our only chance to avoid catastrophic warming. There’s no issue more important to our customers, to our world, than the climate crisis, and we are falling far short,” Emily said in her speech. She added, “Our home, Planet Earth, not distant far off places in space, desperately needs bold leadership. We have the talent, the passion, the imagination. We have the scale, speed and resources. Jeff, all we need is your leadership.”

https://twitter.com/emahlee/status/1131286074393677824

“Jeff remained off-stage, ignored the employees and would not speak to them,” the group said in a statement after the event. “Jeff’s inaction and lack of meaningful response underscore his dismissal of the climate crisis and spoke volumes about how Amazon’s board continues to de-prioritize addressing Amazon’s role in the climate emergency.”

https://twitter.com/AMZNforClimate/status/1131312970594521088

“Amazon has the scale and resources to spark the world’s imagination and lead the way on addressing the climate crisis,” said Jamie Kowalski, a software engineer who co-filed the resolution and attended the shareholder meeting. “What we’re missing is leadership from the very top of the company.” The climate proposal requested a report outlining how Amazon “is planning for disruptions posed by climate change” and “reducing company-wide dependence on fossil fuels”, citing Amazon’s coal-powered data centers and the amount of gasoline burnt for package deliveries. At a press conference following the shareholder meeting, the employees suggested Amazon should put forth a timeline for reaching a zero-emission goal.

The proposal also cited other tech giants that have released reports on their contributions to climate change and have committed to addressing those concerns. For example, Microsoft since 2012 has pledged to decrease its operational carbon emissions 75% by 2030, and Google has been carbon neutral since 2007. In its response to the proposal, Amazon’s board noted that it has committed to reaching a net zero carbon footprint on 50% of shipments by 2030, and that Amazon has a plan to power its global infrastructure, including Amazon Web Services (AWS), with sustainable energy. Other cloud providers, including Google and Microsoft Azure, offset energy usage for hosting to reach a zero carbon footprint, while AWS does not. The board said it agreed that “planning for potential disruptions posed by climate change and reducing company-wide dependence on fossil fuels are important” but defended its stance by saying Amazon was already doing this, and it suggested shareholders vote against the proposal.

Later, during the press conference of the meeting, one of the employees asked Bezos directly if he would support initiatives to address climate change. Bezos said in response, “That’s a very important issue. It’s hard to find an issue that is more important than climate change. … It’s also as everyone knows, a very difficult problem.” He added: “Both e-commerce and cloud computing are inherently more efficient than their alternatives. So we’re doing a lot even intrinsically. But that’s not what I’m talking about in terms of the initiatives we’re taking.” He cited wind, solar and other projects. “There are a lot of initiatives here underway, and we’re not done, we’ll think of more, we’re very inventive,” he concluded.

The employee group said at the press conference that the board’s stance on the proposal made it difficult to pass, and insisted that they would continue to pressure Amazon. “Because the board still does not understand the severity of the climate crisis, we will file this resolution again next year,” said Weston Fribley, another software engineer who co-filed the resolution. “We will announce other actions in the coming months. We – Amazon’s employees – have the talent and experience to remake entire industries with incredible speed. This is work we want to do.”

Amazon S3 is retiring support for path-style API requests; sparks censorship fears
Amazon to roll out automated machines for boxing up orders: Thousands of workers’ job at stake
Amazon resists public pressure to re-assess its facial recognition business; “failed to act responsibly”, says ACLU

Apple proposes a “privacy-focused” ad click attribution model for counting conversions without tracking users

Bhagyashree R
23 May 2019
5 min read
Yesterday, Apple announced a new ad attribution model that aims to strike the right balance between online user privacy and enabling advertisers to measure the effectiveness of their ad campaigns. This model, named Privacy Preserving Ad Click Attribution, is implemented in WebKit and is offered as an experimental feature in Safari Technology Preview 82+.

Ad attribution models and their privacy concerns
Online advertising is one of the most effective ways for businesses to expand their reach and find new customers, and an ad click attribution model allows you to analyze which of your many advertising campaigns or marketing channels lead to actual conversions. Generally, ad attribution is done through cookies and something called “tracking pixels”. Cookies are small data files stored by your browser to remember stateful information, for instance items added to the shopping cart in an online store. A tracking pixel is basically a piece of HTML code that is loaded when a user visits a website or opens an email. If proper privacy protections are not employed, websites can use this data for user profiling. What is worse is that this data can also be sent to third parties like data brokers, affiliate networks, and advertising networks. This collection of browsing data across multiple websites is what is referred to as cross-site tracking.

How Apple’s ad attribution aims to help
Apple’s ad attribution model is built directly into the browser and runs on-device. This ensures that the browser vendor will not be able to see what advertisements are being clicked or what purchases are being made. The Privacy Preserving Ad Click Attribution model works in three steps:

Storing ad clicks
According to Apple's proposal, the page hosting the ad is responsible for storing ad clicks. It does this via two optional attributes: ‘adDestination’ and ‘adCampaignID’. The ‘adDestination’ attribute is the domain the ad click navigates the user to, and ‘adCampaignID’ is the identifier of the ad campaign. Neither the browser vendor nor the website is allowed to read the stored ad click data or detect that it exists. This data is stored for a limited time; in the case of WebKit, it is 7 days.

Matching conversions against stored ad clicks
The second step, matching conversions against stored ad clicks, allows advertisers to understand which of their ad campaigns are the most effective. A conversion is basically getting the user to perform the desired action from your advertisement, for instance a customer adding an item to the shopping cart or signing up for a new service. In this model, tracking pixels are used to determine what actions the user takes that benefit the business. Data like the location of the user, the time of day, and the value of the conversion are passed to the browser through different parameters. Apple ensures that no sensitive data like names or addresses is stored.

Sending out ad click attribution data
In the last step, the browser reports the existence of the conversion to the website or marketer. After the conversion is matched to an ad, the browser sets a timer at random between 24 and 48 hours to send a stateless POST request to the advertiser, and within this window it passes the ad campaign and other parameters to the advertiser.

Apple is previewing this model in Safari Technology Preview 82+. It is also proposing the model as a standard through the W3C Web Platform Incubator Community Group (WICG).

The model has received mixed reactions from users. Some think it can help reduce online tracking. A Reddit user supporting the initiative said, “Ad companies are not having trouble attributing campaigns. The problem is that small, uncoordinated "privacy" features cause Ad Tech companies to become far more aggressive in how they track users. It's not the companies that lose here, it's you. A standardized, privacy-centric method for companies to accomplish attribution will help end the arms race and move back to a more consumer-friendly model. Small edges are worth a fortune in Ads. This is like the war on drugs. Clamping down and assuming ad companies will walk away is way too optimistic. Instead, they will move deeper into the shadows at whatever the cost.” Others think that it is not a browser’s responsibility to help online advertising and that the browser should be on the user's side. “I certainly have never wanted my browser to report ad click attribution,” another Redditor remarked.

Read the full announcement by Apple for more details.

Apple Pay will soon support NFC tags to trigger payments
U.S. Supreme Court ruled 5-4 against Apple on its App Store monopoly case
Apple plans to make notarization a default requirement in all future macOS updates
KubeCon + CloudNativeCon EU 2019 highlights: Microsoft’s Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!

Savia Lobo
22 May 2019
7 min read
KubeCon + CloudNativeCon EU 2019 is live (May 21 to May 23) at the Fira Gran Via exhibition center in Barcelona, Spain. The conference has brought a host of announcements on topics including Kubernetes, DevOps, and cloud-native applications, with news from Microsoft, Google, the Cloud Native Computing Foundation, and more. Here is a brief overview of each of these announcements.

Microsoft Kubernetes announcements: Service Mesh Interface (SMI), Visual Studio Code Kubernetes extension 1.0, Virtual Kubelet 1.0, and Helm 3 alpha

Service Mesh Interface (SMI)
Microsoft launched the Service Mesh Interface (SMI) specification, the company’s new community project for collaboration around service mesh infrastructure. SMI defines a set of common, portable APIs that provide developers with interoperability across different service mesh technologies, including Istio, Linkerd, and Consul Connect. The Service Mesh Interface provides:
A standard interface for meshes on Kubernetes
A basic feature set for the most common mesh use cases
Flexibility to support new mesh capabilities over time
Space for the ecosystem to innovate with mesh technology
To know more about the Service Mesh Interface, head over to Microsoft’s official blog.

Visual Studio Code Kubernetes extension 1.0, Virtual Kubelet 1.0, and the first alpha of Helm 3
Microsoft released version 1.0 of its open source Kubernetes extension for Visual Studio Code. The extension brings native Kubernetes integration to Visual Studio Code and is fully supported for production management of Kubernetes clusters. Microsoft has also added an extensibility API that makes it possible for anyone to build their own integration experiences on top of Microsoft’s baseline Kubernetes integration.

Microsoft also announced Virtual Kubelet 1.0. Brendan Burns, Kubernetes co-founder and Microsoft distinguished engineer, said, “The Virtual Kubelet represents a unique integration of Kubernetes and serverless container technologies, like Azure Container Instances. We developed it and in the context of the Cloud Native Computing Foundation, where it’s a sandbox project.” He further added, “With 1.0, we’re saying ‘It’s ready.’ We think we’ve done all the work that we need in order for people to take production level dependencies on this project.”

Microsoft also released the first alpha of Helm 3. Helm is the de facto standard for packaging and deploying Kubernetes applications. Helm 3 is simpler and supports all the modern security, identity, and authorization features of today’s Kubernetes. Helm 3 revisits and simplifies Helm’s architecture, made possible by the growing maturity of Kubernetes identity and security features, like role-based access control (RBAC), and advanced features such as custom resource definitions (CRDs). Know more about Helm 3 in detail on Microsoft’s official blog post.

Google announces enhancements to Google Kubernetes Engine; Stackdriver Kubernetes Engine Monitoring now generally available
On the first day of KubeCon + CloudNativeCon 2019 yesterday, Google announced three release channels for Google Kubernetes Engine (GKE): Rapid, Regular, and Stable. Google states in its official blog post, “Each channel offers different version maturity and freshness, allowing developers to subscribe their cluster to a stream of updates that match risk tolerance and business requirements.” The feature will launch into alpha with the first release in the Rapid channel, which will give developers early access to the latest versions of Kubernetes. Google also announced the general availability of Stackdriver Kubernetes Engine Monitoring, a tool that gives users GKE observability (metrics, logs, events, and metadata) all in one place, to help provide faster time-to-resolution for issues, no matter the scale. To know more about the three release channels and Stackdriver Kubernetes Engine Monitoring in detail, head over to Google’s official blog post.

Cloud Native Computing Foundation announcements: Harbor 1.8, a new online course ‘Cloud Native Logging with Fluentd’, Intuit Inc. wins the CNCF End User Award, and Kong Inc. becomes a Gold Member

Harbor 1.8
The VMware team released Harbor 1.8 yesterday, with new features and improvements including enhanced automation integration, security, monitoring, and cross-registry replication support. Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor 1.8 also brings various other capabilities for both administrators and end users:
A health check API, which shows detailed status and health of all Harbor components.
An upgrade of the bundled Docker Registry to version 2.7.1. Harbor extends and builds on top of the open source Docker Registry to facilitate registry operations like the pushing and pulling of images.
Support for defining cron-based scheduled tasks in the Harbor UI. Administrators can now use cron strings to define the schedule of a job; scan, garbage collection, and replication jobs are all supported.
API explorer integration. End users can now explore and trigger Harbor’s API via the Swagger UI nested inside Harbor’s UI.
Enhancements to the Job Service engine to include internal webhook events and additional APIs for job management, plus numerous bug fixes to improve the stability of the service.
To know more about this release, read the official Harbor 1.8 blog post.

A new online course on ‘Cloud Native Logging with Fluentd’
The Cloud Native Computing Foundation and The Linux Foundation have together designed a new self-paced, hands-on course, Cloud Native Logging with Fluentd. The course will provide users with the skills needed to deploy Fluentd in a wide range of production settings. Eduardo Silva, Principal Engineer at Arm Treasure Data, said, “This course will explore the full range of Fluentd features, from installing Fluentd and running it in a container, to using it as a simple log forwarder or a sophisticated log aggregator and processor.” “As we see the Fluentd project growing into a full ecosystem of third party integrations and components, we are thrilled that this course will be offered so more people can realize the benefits it provides”, he further added. To know more about this course and its benefits in detail, visit the official blog post.

Intuit Inc. wins the CNCF End User Award
At the conference yesterday, CNCF announced that Intuit Inc. has won the CNCF End User Award in recognition of its contributions to the cloud native ecosystem. Intuit is an active user, contributor, and developer of open source technologies. As a part of its journey to the public cloud, Intuit has advanced the way it leverages cloud native technologies in production, including CNCF projects like Kubernetes and OPA. To know more about this achievement in detail, read the official blog post.

Kong Inc. is now a Gold Member of the CNCF
The CNCF announced that Kong Inc., which provides open source API and service lifecycle management tools, has upgraded its membership to Gold. The company backs the Kong project, a cloud native, fast, scalable, and distributed microservice abstraction layer. Kong is focused on building a service control platform that acts as the nervous system for an organization’s modern software architectures by intelligently brokering information across all services. Dan Kohn, Executive Director of the Cloud Native Computing Foundation, said, “With their focus on open source and cloud native, Kong is a strong member of the open source community and their membership provides resources for activities like bug bounties and security audits that help our community continue to thrive.” Head over to CNCF’s official announcement post for more.

More announcements can be expected from the conference; to stay updated, visit the official KubeCon + CloudNativeCon 2019 website.

F8 Developer Conference Highlights: Redesigned FB5 app, Messenger update, new Oculus Quest and Rift S, Instagram shops, and more
RSA Conference 2019 Highlights: Top 5 cybersecurity products announced
NSA releases Ghidra, a free software reverse engineering (SRE) framework, at the RSA security conference

5 developers explain why they use Visual Studio Code [Sponsored by Microsoft]

Richard Gall
22 May 2019
7 min read
Visual Studio Code has quickly become one of the most popular text editors on the planet. While debate will continue to rage about the relative merits of every text editor, it’s nevertheless true that Visual Studio Code is unique in that it is incredibly customizable: it can be as lightweight as a text editor or as feature-rich as an IDE.

This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free from Microsoft here. Try Visual Studio Code yourself. Learn more here.

This means the range of developers using Visual Studio Code is incredibly diverse, and each one faces a unique set of challenges alongside their personal preferences. I spoke to a few of them about why they use Visual Studio Code and how they make it work for them.

“Visual Studio Code is streamlined and flexible”
Ben Sibley is the Founder of Complete Themes. He likes Visual Studio Code because it is relatively lightweight while also offering considerable flexibility. “I love how streamlined and flexible Visual Studio Code is. Personally, I don’t need a ton of functionality from my IDE, so I appreciate how simple the default configuration is. There's a very concise set of features built-in like the Git integration. “I was using PHPStorm previously and while it was really feature-rich, it was also overwhelming at times. VSC is faster, lighter, and with the extension market you can pick and choose which additional tools you need. And it’s a popular enough editor that you can usually find a reliable and well-reviewed extension.”

Read next: How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]

“Visual Studio Code is the best in terms of extension ecosystem, language support and configuration”
Libby Horacek is a developer at Position Development. She has worked with several different code editors but struggled to find one that allowed her to effectively move between languages. For Libby, Visual Studio Code offered the right level of flexibility. She also explained how the team at Position Development has used VSC’s Live Share feature, which allows developers to directly share and collaborate on code inside their editor. “I currently use Visual Studio Code. I’ve tried a LOT of different editors. I’m a polyglot developer, so I need an editor that isn’t just for one language. RubyMine is great for Ruby, and PyCharm is good for Python, but I don’t want to switch editors every time I switch languages (sometimes multiple times a day). My main constraint is Haskell language support — there are plugins for most IDEs now, but some are better than others. “For a long time I used Emacs just because I was able to steal a great configuration setup for it from a coworker, but a few months back it stopped working due to updates and I didn’t want to acquire the Emacs expertise to fix it. So I tried IntelliJ, Visual Studio, Atom, Sublime Text, even Vim… but in the end I liked Visual Studio the best in terms of extension ecosystem, language support, and ease of use and configuration. “My team also uses Visual Studio’s Live Share for pairing. I haven’t tried it personally but it looks like a great option for remote pairing. The only thing my coworkers have cautioned is that they encountered a bug with the “undo” functionality that wiped out most of a file they were working on. Maybe that bug has been fixed by now, but as always, commit early and commit often!”

“As a JavaScript dev shop, we love that VSC is written in JavaScript”
Cody Swann is the CEO of Gunner Technology, a software development company that builds using JavaScript on AWS for both the public and private sector. “All our developers here [at Gunner Technology] use VSC. “We switched from Sublime about two years ago because Sublime started to feel slow and neglected. “Before that, we used TextMate and abandoned that for the same reasons. “As a JavaScript dev shop, we love that VSC is written in JavaScript. It makes it easier for us to write in-house extensions and such. “Additionally, we love that Microsoft releases monthly updates and keeps improving performance.”

Read next: Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

“The Visual Studio Code team pays close attention to the problems developers face”
Ajeet Dhaliwal is a software developer at Tesults. He explains that he has used several different IDEs and editors but came to Visual Studio Code after spending some time using Node.js and React on Brackets. “I have used Visual Studio Code almost exclusively for the last couple of years. “In years prior to making this switch, the nature of the development work that I did meant that I was broadly limited to using specific IDEs such as Visual Studio and Xcode. Then in 2014 I started to get into Node.js and was looking for a code editor that would be more suitable. I tried out a few and ultimately settled on Brackets. “I used Brackets for a while but wasn’t always happy with it. The most annoying issue was the way text was rendered on my Mac. “Over time I started doing React work too, and every time I revisited VSC the improvements were impressive. It seemed to me that the developers were paying close attention to the problems developers face; they were creating features I had never even thought I would need, and the extensions added highly useful features for Node.js and React dev work. The font rendering was not an issue either, so it became an inevitable switch.”

“I have to context switch regularly - I expect my brain to be the slowest element, not the IDE”
Kyle Balnave is Senior Developer and Squad Manager at High Speed Training. Despite working with numerous editors and IDEs, he likes Visual Studio Code because it allows him to move between different contexts incredibly quickly. Put simply, it allows him to work faster than other IDEs do. "I've used several different editors over the years. They generally fall under two categories: monolithic (I can do anything you'll ever want to do out of the box) and modular (I do the basics but allow extensions to be added to do most of the rest). “The former are IDEs like Netbeans, IntelliJ and Visual Studio. In my experience they are slow to load and need a more powerful development machine to keep responsive. They have a huge range of functionality, but in everyday development I just need it to be an intelligent code editor. “The latter are IDEs like Eclipse, Visual Studio Code, Atom. They load quickly, respond fast and have a wide range of extensions that allow me to develop what I need. They sometimes fall short in their functionality, but I generally find this to be infrequent. “Why do I use VSCode? Because it doesn't slow me down when I code. I have to context switch regularly, so I expect my own brain to be the slowest element, not the IDE.”
Learn how to develop with Node.js on Azure by downloading Learning Node.js with Azure for free, courtesy of Microsoft.
How Change.org uses Flow, Elixir’s library to build concurrent data pipelines that can handle a trillion messages

Sugandha Lahoti
22 May 2019
7 min read
Last month, at ElixirConf EU 2019, John Mertens, Principal Engineer at Change.org, conducted a session, Lessons From Our First Trillion Messages with Flow, for developers interested in using Elixir to build data pipelines in real-world systems. For many Elixir converts, the attraction of Elixir is rooted in the promise of the BEAM concurrency model. The Flow library has made it easy to build concurrent data pipelines utilizing the BEAM. (BEAM was originally short for Bogdan's Erlang Abstract Machine, named after Bogumil "Bogdan" Hausman, who wrote the original version; the name may also be read as Björn's Erlang Abstract Machine, after Björn Gustavsson, who wrote and maintains the current version.) The problem is that, while the docs are great, there are not many resources on running Flow-based systems in production. In his talk, John shares some lessons his team learned from processing their first trillion messages through Flow.

Using Flow at Change.org

Change.org is a platform for social change where people from all over the world come to start movements on all topics and of all sizes. Technologically, Change.org is primarily built in Ruby and JavaScript, but the team started using Elixir in early 2018 to build a high-volume, mission-critical data processing pipeline. They chose Elixir for this new system because of its library, Flow. Flow is a library for computational parallel flows in Elixir. It is built on top of GenStage. GenStage is a "specification and computational flow for Elixir", meaning it provides a way for developers to define a pipeline of work to be carried out by independent steps (or stages) in separate processes. Flow allows developers to express computations on collections, similar to the Enum and Stream modules, although computations will be executed in parallel using multiple GenStages. At Change.org, the developers built proofs of concept in a few different languages and put them up against each other, with the two main criteria being performance and developer happiness. Elixir came out as the clear winner. Whenever an event gets added to a queue on Change.org, their Elixir system pulls the message off the queue, preps and transforms it, applies some business logic, and generates some side effects. Next, depending on a few parameters, the message is either passed on to another system, discarded, or retried. So far things have gone smoothly for them, which brought John to discuss lessons from processing their first trillion messages with Flow.

Lesson 1: Let Flow do the work

Flow and GenStage are both great libraries that provide a few game-changing features by default. The first is parallelism. Parallelism is beneficial for large-scale data processing pipelines, and Flow's abstractions make utilizing it easier: it is as easy as writing code that looks essentially like a standard Elixir pipeline but that utilizes all of your CPU cores. The second feature of Flow is backpressure. GenStage specifies how Elixir processes should communicate with back-pressure. Simply put, backpressure is when your system asks for more data to process instead of having data pushed onto it. With Flow, your data processing is in charge of requesting more events. This means that if your process depends on some other service and that service becomes slow, your whole flow slows down accordingly and no service gets overloaded with requests, keeping the whole system up.
Lesson 2: Organize your Flow

The next lesson is on how to set up your code to take advantage of Flow. These organizational tactics help Change.org keep their Flow system manageable in practice.

Keep the Flow simple. The golden rule, according to John, is to keep your flow simple. Start simple and then increase the complexity depending on the constraints of your system. He discusses a quote from the Flow docs, which states: "If you can solve a problem without using partition at all, that is preferred. Those are typically called embarrassingly parallel problems." If you can shape your problem into an embarrassingly parallel problem, he says, Flow can really shine.

Know your code and your system. He also advises that developers should know their code and understand their systems. He then gives an example of how SQS is used in Flow. Amazon SQS (Simple Queue Service) is a message-queuing system (also used at Change.org) that allows you to write distributed applications by exposing a message pipeline that can be processed in the background by workers. Its two main features are the visibility window and acknowledgments. When you pull a message off a queue, you have a set amount of time to acknowledge that you've received and processed it; that amount of time is called the visibility window, and it's configurable. If you don't acknowledge the message within the visibility window, it goes back into the queue. If a message is pulled and not acknowledged a configured number of times, then it is either discarded or sent to a dead letter queue. He then walks through an example of a Flow they use in production.

Maintain a consistent data structure. You should also use a consistent data structure, or token, throughout the data pipeline. The data structure most essential to the flow at Change.org is the message struct, %Message{}. When a message comes in from SQS, they create a message struct based on it. Having the same data structure at every step of the flow is how they keep their system simple. He then explains example code showing how they handle different types of data while keeping the flow simple.

Isolate the side effects. The next organizational tactic that helps Change.org keep their Flow system manageable in practice is to isolate the side effects. Side effects are mutations; if something goes wrong in a mutation, you need to be able to roll it back. In the spirit of keeping the flow simple, Change.org batches all the side effects together and puts them at the end, so that nothing gets lost if they need to roll them back. However, there are certain cases where you can't put all side effects together and need a different strategy. These cases can be handled using Flow sagas. The saga pattern is a way to handle long-lived transactions by providing rollback instructions for each step along the way, so that if a step goes bad, the pipeline can just run that rollback function. There is also an Elixir implementation of sagas called Sage.

Lesson 3: Tune the flow

How you optimize your Flow depends on the shape of your problem. This means tailoring the Flow to your own use case to squeeze out all the throughput. There are three aspects to this: measuring flow performance, knowing which knobs you can actually tune, and knowing how you can help from outside the Flow.
Apart from the three main lessons on data processing through Flow, John also mentions a few others, namely graceful producer shutdowns, Flow-level integration tests, and complex batching. Finally, John gave a glimpse of Broadway from Change.org's codebase. Broadway allows developers to build concurrent, multi-stage data ingestion and data processing pipelines with Elixir. It takes on the burden of defining concurrent GenStage topologies and provides a simple configuration API that automatically defines concurrent producers, concurrent processing, batch handling, and more, leading to both time- and cost-efficient ingestion and processing of data. Some of its features include back-pressure, automatic acknowledgments at the end of the pipeline, batching, automatic restarts in case of failures, graceful shutdown, built-in testing, and partitioning. José Valim's keynote at ElixirConf EU 2019 also talked about streamlining data processing pipelines using Broadway. You can watch the full video of John Mertens' talk here. John is a Principal Engineer at Change.org, using Elixir to empower social action in his organization. Why Ruby developers like Elixir Introducing Mint, a new HTTP client for Elixir Developer community mourns the loss of Joe Armstrong, co-creator of Erlang
Getting started with designing RESTful APIs

Vincy Davis
21 May 2019
9 min read
The application programmable interface (API) is one of the most promising software paradigms to address anything, anytime, anywhere, and any device, which is the one substantial need of the digital world at the moment. This article discusses how APIs and API designs help to address those challenges and bridge the gaps. It discusses a few essential API design guidelines, such as consistency, standardization, re-usability, and accessibility through REST interfaces, which could equip API designers with better thought processes for their API modeling. This article is an excerpt taken from the book, 'Hands-On RESTful API Design Patterns and Best Practices' written by Harihara Subramanian and Pethura Raj. In this article, you will understand the various design rules of RESTful APIs including the use of Uniform Resource Identifiers, URI authority, Resource modelling and many more. Goals of RESTful API design APIs are straightforward, unambiguous, easy to consume, well-structured, and most importantly accessible with well-known and standardized HTTP methods. They are one of the best possible solutions for resolving many digitization challenges out of the box. The following are the basic API design goals: Affordance Loosely coupled Leverage existing web architecture RESTful API design rules The best practices and design principles are guidelines that API designers try to incorporate in their API design. So for making the API design RESTFUL, certain rules are followed such as  the following: Use of Uniform Resource Identifiers URI authority Resource modelling Resource archetypes URI path URI query Metadata design rules (HTTP headers and returning error codes) and representations It will be easier to design and deliver the finest RESTful APIs, if we understand these design rules. Uniform Resource Identifiers REST APIs should use Uniform Resource Identifiers (URIs) to represent their resources. Their indications should be clear and straightforward so that they communicate the APIs resources crisp and clearly: A sample of a simple to understand URI is https://xx.yy.zz/sevenwonders/tajmahal/india/agra, as you may observe that the emphasized texts clearly indicates the intention or representation A harder to understand URI is https://xx.yy.zz/books/36048/9780385490627; in this sample, the text after books is very hard for anyone to understand So having simple, understandable representation in the URI is critical in RESTful API design. URI formats The syntax of the generic URI is scheme "://" authority "/" path [ "?" query ] [ "#" fragment ] and following are the rules for API designs: Use forward slash (/) separator Don't use a trailing forward slash Use hyphens (-) Avoid underscores (_) Prefer all lowercase letters in a URI path Do not include file extensions REST API URI authority As we've seen different rules for URIs in general, let's look at the authority (scheme "://" authority "/" path [ "?" query ] [ "#" fragment ]) portion of the REST API URI: Use consistent sub-domain names: As you see in http://api.baseball.restfulapi.org, the API domain should have api as part of its sub-domainConsistent sub-domain names for an API include the following: The top-level domain and the first sub-domain names indicate the service owner and an example could be baseball.restfulapi.org Consistent sub-domain names for a developer portal  include the following: As we saw in the API playgrounds section, the API providers should have exposed sites for APP developers to test their APIs called a developer portal. 
So, by convention, the developer portal's sub-domain should have developer in it. An example of a sub-domain with the developer for a developer portal would be http://developer.baseball.restfulapi.org. Resource modelling Resource modeling is one of the primary aspects for API designers as it helps to establish the APIs fundamental concepts. In general, the URI path always convey REST resources, and each part of the URI is separated by a forward slash (/) to indicate a unique resource within it model's hierarchy. Each resource separated by a forward slash indicates an addressable resource, as follows: https://api-test.lufthansa.com/v1/profiles/customers https://api-test.lufthansa.com/v1/profiles https://api-test.lufthansa.com Customers, profiles, and APIs are all unique resources in the preceding individual URI models. So, resource modelling is a crucial design aspect before designing URI paths. Resource archetypes Each service provided by the API is an archetype, and they indicate the structures and behaviors of REST API designs. Resource modelling should start with a few fundamental resource archetypes, and usually, the REST API is composed of four unique archetypes, as follows: Document Collection Stores Controller URI path This section discusses rules relating to the design of meaningful URI paths (scheme "://" authority "/" path [ "?" query ] [ "#" fragment ]) portion of the REST API URIs. The following are the rules about URI paths: Use singular nouns for document names, for example, https://api-test.lufthansa.com/v1/profiles/customers/memberstatus. Use plural nouns for collections and stores: Collections: https://api-test.lufthansa.com/v1/profiles/customers Stores: https://api-test.lufthansa.com/v1/profiles/customers/memberstatus/prefernces As controller names represent an action, use a verb or verb phrase for controller resources. An example would be https://api-test.lufthansa.com/v1/profiles/customers/memberstatus/reset Do not use CRUD function names in URIs: Correct URI example: DELETE /users/1234 Incorrect URIs: DELETE /user-delete /1234, and POST /users/1234/delete URI query These are the rules relating to the design of the query (scheme "://" authority "/" path [ "?" query ] [ "#" fragment ]) portion of the REST API URIs. The query component of the URI also represents the unique identification of the resource, and following are the rules about URI queries: Use the query to filter collections or stores: An example of the limit in the query: https://api.lufthansa.com/v1/operations/flightstatus/arrivals/ZRH/2018-05-21T06:30?limit=40 Use the query to paginate collection or store results: An example with the offset in the query: https://api.lufthansa.com/v1/operations/flightstatus/arrivals/ZRH/2018-05-21T06:30?limit=40&offset=10 HTTP interactions A REST API doesn't suggest any special transport layer mechanisms, and all it needs is basic Hyper Text Transfer Protocol and its methods to represent its resources over the web. We will touch upon how REST should utilize those basic HTTP methods in the upcoming sections. Request methods The client specifies the intended interaction with well-defined semantic HTTP methods, such as GET, POST, PUT, DELETE, PATCH, HEAD, and OPTIONS. 
The following are the rules that an API designer should take into account when planning their design: Don't tunnel to other requests with the GET and POST methods Use the GET method to retrieve a representation of a resource Use the HEAD method to retrieve response headers Use the PUT method to update and insert a stored resource Use the PUT method to update mutable resources Use the POST method to create a new resource in a collection Use the POST method for controller's execution Use the DELETE method to remove a resource from its parent Use the OPTIONS method to retrieve metadata Response status codes HTTP specification defines standard status codes, and REST API can use the same status codes to deliver the results of a client request. The status code categories and a few associated REST API rules are as follows so that the APIs can apply those rules according to the process status: 1xx: Informational: This provides protocol-level information 2xx: Success: Client requests are accepted (successfully), as in the following examples: 200: OK 201: Created 202: Accepted 204: No content 3xx: Redirection: Client requests are redirected by the server to the different endpoints to fulfill the client request: 301: Moved Permanently 302: Found 303: See other 304: Not modified 307: Temporarily redirect 4xx: Client error: Errors at client side: 400: Bad request 401: Unauthorized 403: Forbidden 404: Not found 405: Method not allowed 406: Not acceptable 409: Conflict 412: Precondition failed 415: Unsupported media type 5xx: Server error: These relate to errors at server side: 500: Internal server error Metadata design It looks at the rules for metadata designs, including HTTP headers and media types HTTP headers HTTP specifications have a set of standard headers, through which a client can get information about a requested resource, and carry the messages that indicate its representations and may serve as directives to control intermediary caches. The following points suggest a few sets of rules conforming to the HTTP standard headers: Should use content-type Should use content-length Should use last-modified in responses Should use ETag in responses Stores must support conditional PUT request Should use the location to specify the URI of newly created resources (through PUT) Should leverage HTTP cache headers Should use expiration headers with 200 ("OK") responses May use expiration caching headers with 3xx and 4xx responses Mustn't use custom HTTP headers Media types and media type design rules Media types help to identify the form of the data in a request or response message body, and the content-type header value represents a media type also known as the Multipurpose Internet Mail Extensions (MIME) type. Media type design influences many aspects of a REST API design, including hypermedia, opaque URIs, and different and descriptive media types so that app developers or clients can rely on the self-descriptive features of the REST API. 
The following are the two rules of media type design: use application-specific media types, and support media type negotiation in the case of multiple representations. REST APIs may also support media type selection using a query parameter: to support clients with simple links and easier debugging, an API can allow media type selection through a query parameter named accept, with a value format that mirrors that of the Accept HTTP request header. A query-parameter example is GET https://swapi.co/api/planets/1/?format=json, which selects the JSON representation; REST APIs should prefer the more precise and generic accept parameter form over other alternatives such as format.

Summary

We have briefly discussed the goals of RESTful API design and how API designers need to follow design principles and rules so that you can create better RESTful APIs. To know more about the rules for the most common resource formats, such as JSON and hypermedia, and error types, in brief, client concerns, head over to the book, 'Hands-On RESTful API Design Patterns and Best Practices'. Svelte 3 releases with reactivity through language instead of an API Get to know ASP.NET Core Web API [Tutorial] Implement an API Design-first approach for building APIs [Tutorial]
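To tie several of the above rules together from a client's perspective (plural collection nouns, filtering and pagination through the query string, and media type selection via the Accept header), here is a minimal JavaScript sketch. It is not from the book; the Lufthansa-style URL and the limit and offset parameters are reused from the examples above purely as an illustration.

// Hypothetical client call illustrating the URI and metadata rules above.
// The host, path, and query parameters mirror the Lufthansa-style examples
// in this article and are assumptions, not a guaranteed public API contract.
async function fetchArrivals(airportCode, fromDateTime, { limit = 40, offset = 10 } = {}) {
  // Collection resource addressed with plural nouns, filtered and paginated via the query
  const url =
    `https://api.lufthansa.com/v1/operations/flightstatus/arrivals/` +
    `${airportCode}/${fromDateTime}?limit=${limit}&offset=${offset}`;

  const response = await fetch(url, {
    // Media type selection through the Accept request header
    headers: { Accept: 'application/json' },
  });

  // 4xx and 5xx responses surface here; the status code categories are listed above
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}

// Example usage with assumed values:
// fetchArrivals('ZRH', '2018-05-21T06:30').then(console.log).catch(console.error);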
Approx. 250 public network users affected during Stack Overflow's security attack

Vincy Davis
20 May 2019
4 min read
In a security update released on May 16, StackOverflow confirmed that “some level of their production access was gained on May 11”. In a recent “Update to Security Incident” post, Stack Overflow provides further details of the security attack including the actual date and duration of the attack, how the attack took place, and the company’s response to this incident. According to the update, the first intrusion happened on May 5 when a build deployed for the development tier for stackoverflow.com contained a bug. This allowed the attacker to log in to their development tier as well as escalate its access on the production version of stackoverflow.com. From May 5 onwards, the intruder took time to explore the website until May 11. Post which the intruder made changes in the Stack Overflow system to obtain a privileged access on production. This change was identified by the Stack Overflow team and led to immediately revoking their network-wide access and also initiating an investigation on the intrusion. As part of their security procedure to protect sensitive customer data, Stack Overflow maintains separate infrastructure and network for their clients of Teams, Business, and Enterprise products. They have not found any evidence to these systems or customer data being accessed. The Advertising and Talent businesses of Stack Overflow were also not impacted. However, the team has identified some privileged web request that the attacker had made, which might have returned an IP address, names, or emails of approximately 250 public network users of Stack Exchange. These affected users will be notified by Stack Overflow. Steps taken by Stack Overflow in response to the attack Terminated the unauthorized access to the system. Conducted an extensive and detailed audit of all logs and databases that they maintain, which allowed them to trace the steps and actions that were taken. Remediated the original issues that allowed unauthorized access and escalation. Issued a public statement proactively. Engaged third-party forensics and incident response firm to assist with both remediation and learnings of Stack Overflow. Have taken precautionary measures such as cycling secrets, resetting company passwords, and evaluating systems and security levels. Stack Overflow has again promised to provide more public information after their investigation cycle concludes. Many developers are appreciating the quick confirmation, updates and the response taken by Stack Overflow in this security attack incident. https://twitter.com/PeterZaitsev/status/1129542169696657408 A user on Hacker news comments, “I think this is one of the best sets of responses to a security incident I've seen: Disclose the incident ASAP, even before all facts are known. The disclosure doesn't need to have any action items, and in this case, didn't Add more details as investigation proceeds, even before it fully finishes to help clarify scope The proactive communication and transparency could have downsides (causing undue panic), but I think these posts have presented a sense that they have it mostly under control. Of course, this is only possible because they, unlike some other companies, probably do have a good security team who caught this early. I expect the next (or perhaps the 4th) post will be a fuller post-mortem from after the incident. 
This series of disclosures has given me more confidence in Stack Overflow than I had before!” Another user on Hacker News added, “Stack Overflow seems to be following a very responsible incident response procedure, perhaps instituted by their new VP of Engineering (the author of the OP). It is nice to see.” Read More 2019 Stack Overflow survey: A quick overview Bryan Cantrill on the changing ethical dilemmas in Software Engineering Listen to Uber engineer Yuri Shkuro discuss distributed tracing and observability [Podcast]
Understanding advanced patterns in RESTful API [Tutorial]

Vincy Davis
20 May 2019
11 min read
Every software designer agrees that design patterns, and solving familiar yet recurring design problems by implementing design patterns, are inevitable in the modern software design-and-development life cycle. These advanced patterns will help developers with the best-possible RESTful services implementation. This article is an excerpt taken from the book, 'Hands-On RESTful API Design Patterns and Best Practices' written by Harihara Subramanian and Pethura Raj. In this book, design strategy, essential and advanced Restful API Patterns, Legacy Modernization to Micro services-centric apps are covered. This article will help you understand the advanced patterns in RESTful API including Versioning, Authorization, Uniform contract, Entity endpoints, and many more. Versioning The general rules of thumb we'd like to follow when versioning APIs are as follows: Upgrade the API to a new major version when the new implementation breaks the existing customer implementations Upgrade the API to a new minor version of the API when the new implementation provides enhancements and bug fixes; however, ensure that the implementation takes care of backward-compatibility and has no impact on the existing customer implementations There are four different ways that we can implement versioning in our API: Versioning through the URI path The major and minor version changes can be a part of the URI, for example, to represent v1 or v2 of the API the URI can be http://localhost:9090/v1/investors or http://localhost:9090/v2/investors, respectively. The URI path versioning is a popular way of managing API versions due to its simple implementation. Versioning through query parameters The other simple method for implementing the version reference is to make it part of the request parameters, as we see in the following examples—http://localhost:9090/investors?version=1, http://localhost:9090/investors?version=2.1.0: @GetMapping("/investors") public List<Investor> fetchAllInvestorsForGivenVersionAsParameter( @RequestParam("version") String version) throws VersionNotSupportedException { if (!(version.equals("1.1") || version.equals("1.0"))) { throw new VersionNotSupportedException("version " + version); } return investorService.fetchAllInvestors(); } Versioning through custom headers A custom header allows the client to maintain the same URIs, regardless of any version upgrades. The following code snippet will help us understand the version implementation through a custom header named x-resource-version. Note that the custom header name can be any name; in our example, we name it x-resource-version: @GetMapping("/investorsbycustomheaderversion") public List<Investor> fetchAllInvestors...( @RequestHeader("x-resource-version") String version) throws VersionNotSupportedException { return getResultsAccordingToVersion(version); } Versioning through content-negotiation Providing the version information through the Accept (request) header along with the content-type (media) in response is the preferred way as this helps to version APIs without any impact on the URI. This is done by a code implementation of versioning through Accept and Content-Type: @GetMapping(value = "/investorsbyacceptheader", headers = "Accept=application/investors-v1+json, application/investors-v1.1+json") public List<Investor> fetchAllInvestorsForGiven..() throws VersionNotSupportedException { return getResultsAccordingToVersion("1.1"); } The right versioning is determined on a case-by-case basis. 
However, the content-negotiation and custom headers are a proponent of RESTful-compliant services. Authorization How do we ensure our REST API implementation is accessible only to genuine users and not to everyone? In our example, the investor's list should not be visible to all users, and the stocks URI should not be exposed to anyone other than the legitimate investor.  Here we are implementing simple basic authentication through the authorization header. The basic authentication is a standard HTTP header (RESTful API constraint compliant) with the user's credentials encoded in Base64. The credentials (username and password) are encoded in the format of username—password. The credentials are encoded not encrypted, and it's vulnerable to specific security attacks, so it's inevitable that the rest API implementing basic authentication will communicate over SSL (https). Authorization with the default key Securing the REST API with basic authentication is exceptionally simplified by the Spring security framework. Merely adding the following entries in pom.xml provides basic authentication to our investor service app: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency> Now rebuild (mvn clean package) the application and restart it. It's time to test our APIs with the postman tool. When we hit the URL, unlike our earlier examples, we'll see an error complaining Full authorization required to access this resource The preceding error is due to the addition of spring-security into our pom.xml file. We can access the REST API by observing a text using the default security password or search for it in our log file. That's the key for anyone to access our API. We need to provide BasicAuth as the Authorization header for the API that we are accessing; we will see the results now without any authentication errors. Also the Authorization header that carries the XYZKL... token prefixed with Basic, as we use the HTTP Authentication header to enforce REST API authentication. Authorization with credentials In many real-time situations, we need to use specific credentials to access the API and not the default one; in such cases, we can enhance our investor service application and secure it with our custom credentials by using few additional out-of-the-box spring modules. In our investor service, we will have a new class, called PatronAuthConfig.java, which helps the app to enforce the credentials to the URLs that we would like to secure: @Configuration @EnableWebSecurity public class PatronsAuthConfig extends WebSecurityConfigurerAdapter { ..... We can implement the security with a few annotations. Uniform contract Services will always evolve with additional capabilities, enhancements, and defects fixes, however, now a service consumer can consume the latest version of our services without the need to keep changing their implementation or REST API endpoints. The uniform contract pattern comes to the rescue to overcome the problems. 
The pattern suggests the following measures: standardize the service contract and make it uniform across all service endpoints; abstract the service endpoints from individual service capabilities; and follow REST principles, where the endpoints use only HTTP verbs and express the underlying resources' executable actions only with HTTP verbs.

Entity endpoints

If service clients want to interact with entities, such as investors and their stocks, without needing to manage a compound identifier for both investor and stock, we need a pattern called entity endpoint. Entity endpoints suggest exposing each entity as an individual, lightweight endpoint of the service it resides in, so that service consumers get global addressability of service entities. The entity endpoints expose reusable enterprise resources, so service consumers can reuse and share the entity resources. The investor service exposes a couple of entity endpoints, such as /investors/investorId and investor/stockId; they are a few examples of entity endpoints that service consumers can reuse and standardize.

Endpoint redirection

Changing service endpoints isn't always ideal. However, if it has to happen, will the service client know about it and use the new endpoint? Yes: with standard 3xx HTTP return codes and the Location header. On receiving 301 Moved Permanently or 307 Temporary Redirect, the service client can act accordingly. The endpoint redirection pattern suggests returning standard HTTP headers that provide an automatic reference from stale endpoints to the current endpoints. The service consumers may then call the new endpoints found in the Location header.

Idempotent

Imagine a bank's debit API failed immediately after deducting some amount from the client account. The client doesn't know about it and reissues the call to debit! Alas, the client loses money. So how can a service implementation handle messages/data and produce the same results, even after multiple calls? Idempotency is one of the fundamental resilience and scalability patterns, as it decouples the service implementation nodes across distributed systems. Whether dealing with data or messages, services should always be designed to be idempotent in nature. There is a simple solution: use the idempotent capabilities of the HTTP web APIs, whereby services can guarantee that any number of repeated calls, caused by intermittent communication failures, is safe, and process those multiple calls on the server without any side effects.

Bulk operation

Marking a list of emails as read in our email client could be an example of a bulk operation; the customer chooses more than one email to tag as read, and one REST API call does the job instead of multiple calls to an underlying API. The following two approaches are suggested for implementing bulk operations: content-based bulk operations, and custom-header action-identifier-based bulk operations. Bulk operations may involve many other aspects, such as ETags, asynchronous execution, or a parallel-stream implementation, to make them effective.

Circuit breaker

The circuit breaker is an automatic switch designed to protect an entire electrical circuit from damage due to excess current load as a result of a short circuit or overload. The same concept applies when services interact with many other services. Failure due to any issue can potentially create catastrophic effects across the application, and preventing cascading impacts is the sole aim of the circuit-breaker pattern.
Hence, this pattern helps subsystems to fail gracefully and also prevents complete system failure as a result of a subsystem failures. There are three different states that constitute the circuit breaker: Closed Open Half-open There's a new service called circuit-breaker-service-consumer, which will have all the necessary circuit-breaker implementations, along with a call to our first service. Combining the circuit pattern and the retry pattern As software designers, we understand the importance of gracefully handling application failures and failure operations. We may achieve better results by combining the retry pattern and the circuit breaker pattern as it provides the application with greater flexibility in handling failures. The retry patterns enable the application to retry failed operations, expecting those operations to become operational and eventually succeed. However, it may result in a denial of service (DoS) attack within our application. API facade API facade abstracts the complex subsystem from the callers and exposes only necessary details as interfaces to the end user. The client can call one API facade to make it simpler and more meaningful in cases where the clients need multiple service calls. However, that can be implemented with a single API endpoint instead of the client calling multiple endpoints. The API facades provide high scalability and high performance as well. The investor services have implemented a simple API facade implementation for its delete operations. As we saw earlier, the delete methods call the design for intent methods. However, we have made the design for the intent method abstract to the caller by introducing a simple interface to our investor services. That brings the facade to our API. The interface for the delete service is shown as follows: public interface DeleteServiceFacade { boolean deleteAStock(String investorId, String stockTobeDeletedSymbol); boolean deleteStocksInBulk(String investorId, List<String> stocksSymbolsList); } Backend for frontend Backend for frontend (BFF) is a pattern first described by Sam Newman; it helps to bridge any API design gaps. BFF suggests introducing a layer between the user experience and the resources it calls. It also helps API designers to avoid customizing a single backend for multiple interfaces. Each interface can define its necessary and unique requirements that cater to frontend requirements without worrying about impacting other frontend implementations. BFF may not fit in cases such as multiple interfaces making the same requests to the backend, or using only one interface to interact with the backend services. So caution should be exercised when deciding on separate, exclusive APIs/interfaces, as it warrants additional and lifelong maintenance, security improvement within layers, additional customized designs that lead to lapses in security, and defect leaks. Summary In this article, we have discussed versioning our APIs, securing APIs with authorization, and enabling the service clients with uniform contract, entity endpoint, and endpoint redirection implementations. We also learned about Idempotent and its importance, which powers APIs with bulk operations. Having covered various advanced patterns, we concluded the article with the circuit breaker and the BFF pattern. These advanced pattern's of restful API's will provide our customers and app developers with the best-possible RESTful services implementation. 
To know more about the rules for most common resource formats, such as JSON and hypermedia, and error types, in brief, client concerns, head over to the book, 'Hands-On RESTful API Design Patterns and Best Practices'. Inspecting APIs in ASP.NET Core [Tutorial] Google announces the general availability of a new API for Google Docs The tug of war between Google and Oracle over API copyright issue has the future of software development in the crossfires
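As a client-side companion to the versioning and authorization sections above, here is a minimal JavaScript sketch of how a consumer might call the investor service's versioned endpoints. The base URL, endpoint paths, and header names are taken from the article's own examples; the credentials and the Node 18+ runtime are assumptions, and the snippet is an illustration rather than code from the book.

// Hypothetical consumer of the investor service described above (Node 18+ assumed
// for the built-in fetch). Endpoint paths and header names come from this article;
// the username/password pair is a placeholder for your own credentials.
const BASE_URL = 'http://localhost:9090';
const basicAuth = 'Basic ' + Buffer.from('user:password').toString('base64');

// Versioning through a query parameter
async function investorsByQueryVersion(version = '1.0') {
  const res = await fetch(`${BASE_URL}/investors?version=${version}`, {
    headers: { Authorization: basicAuth },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Versioning through the custom x-resource-version header
async function investorsByCustomHeader(version = '1.1') {
  const res = await fetch(`${BASE_URL}/investorsbycustomheaderversion`, {
    headers: { Authorization: basicAuth, 'x-resource-version': version },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Versioning through content negotiation on the Accept header
async function investorsByAcceptHeader() {
  const res = await fetch(`${BASE_URL}/investorsbyacceptheader`, {
    headers: { Authorization: basicAuth, Accept: 'application/investors-v1.1+json' },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

In production, the same calls would of course go over HTTPS, as the article notes for basic authentication.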
Implementing routing with React Router and GraphQL [Tutorial]

Bhagyashree R
19 May 2019
15 min read
Routing is essential to most web applications. You cannot cover all of the features of your application in just one page. It would be overloaded, and your user would find it difficult to understand. Sharing links to pictures, profiles, or posts is also very important for a social network such as Graphbook. It is also crucial to split content into different pages, due to search engine optimization (SEO). This article is taken from the book Hands-on Full-Stack Web Development with GraphQL and React by Sebastian Grebe. This book will guide you in implementing applications by using React, Apollo, Node.js, and SQL. By the end of the book, you will be proficient in using GraphQL and React for your full-stack development requirements. To follow along with the examples implemented in this article, you can download the code from the book’s GitHub repository. In this article, we will learn how to do client-side routing in a React application. We will cover the installation of React Router, implement routes, create user profiles with GraphQL backend, and handle manual navigation. Installing React Router We will first start by installing and configuring React Router 4 by running npm: npm install --save react-router-dom From the package name, you might assume that this is not the main package for React. The reason for this is that React Router is a multi-package library. That comes in handy when using the same tool for multiple platforms. The core package is called react-router. There are two further packages. The first one is the react-router-dom package, which we installed in the preceding code, and the second one is the react-router-native package. If at some point, you plan to build a React Native app, you can use the same routing, instead of using the browser's DOM for a real mobile app. The first step that we will take introduces a simple router to get our current application working, including different paths for all of the screens. There is one thing that we have to prepare before continuing. For development, we are using the webpack development server. To get the routing working out of the box, we will add two parameters to the webpack.client.config.js file. The devServer field should look as follows: devServer: { port: 3000, open: true, historyApiFallback: true, }, The historyApiFallback field tells the devServer to serve the index.html file, not only for the root path, http://localhost:3000/ but also when it would typically receive a 404 error. That happens when the path does not match a file or folder that is normal when implementing routing. The output field at the top of the config file must have a publicPath property, as follows: output: { path: path.join(__dirname, buildDirectory), filename: 'bundle.js', publicPath: '/', }, The publicPath property tells webpack to prefix the bundle URL to an absolute path, instead of a relative path. When this property is not included, the browser cannot download the bundle when visiting the sub-directories of our application, as we are implementing client-side routing. Implementing your first route Before implementing the routing, we will clean up the App.js file. Create a Main.js file next to the App.js file in the client folder. 
Insert the following code: import React, { Component } from 'react'; import Feed from './Feed'; import Chats from './Chats'; import Bar from './components/bar'; import CurrentUserQuery from './components/queries/currentUser'; export default class Main extends Component { render() { return ( <CurrentUserQuery> <Bar changeLoginState={this.props.changeLoginState}/> <Feed /> <Chats /> </CurrentUserQuery> ); }} As you might have noticed, the preceding code is pretty much the same as the logged in condition inside the App.js file. The only change is that the changeLoginState function is taken from the properties, and is not directly a method of the component itself. That is because we split this part out of the App.js and put it into a separate file. This improves reusability for other components that we are going to implement. Now, open and replace the render method of the App component to reflect those changes, as follows: render() { return ( <div> <Helmet> <title>Graphbook - Feed</title> <meta name="description" content="Newsfeed of all your friends on Graphbook" /> </Helmet> <Router loggedIn={this.state.loggedIn} changeLoginState= {this.changeLoginState}/> </div> ) } If you compare the preceding method with the old one, you can see that we have inserted a Router component, instead of directly rendering either the posts feed or the login form. The original components of the App.js file are now in the previously created Main.js file. Here, we pass the loggedIn state variable and the changeLoginState function to the Router component. Remove the dependencies at the top, such as the Chats and Feed components, because we won't use them any more thanks to the new Main component. Add the following line to the dependencies of our App.js file: import Router from './router'; To get this code working, we have to implement our custom Router component first. Generally, it is easy to get the routing running with React Router, and you are not required to separate the routing functionality into a separate file, but, that makes it more readable. To do this, create a new router.js file in the client folder, next to the App.js file, with the following content: import React, { Component } from 'react'; import LoginRegisterForm from './components/loginregister'; import Main from './Main'; import { BrowserRouter as Router, Route, Redirect, Switch } from 'react-router-dom'; export default class Routing extends Component { render() { return ( <Router> <Switch> <Route path="/app" component={() => <Main changeLoginState= {this.props.changeLoginState}/>}/> </Switch> </Router> ) }} At the top, we import all of the dependencies. They include the new Main component and the react-router package. The problem with the preceding code is that we are only listening for one route, which is /app. If you are not logged in, there will be many errors that are not covered. The best thing to do would be to redirect the user to the root path, where they can log in. Advanced routing with React Router The primary goal of this article is to build a profile page, similar to Facebook, for your users. We need a separate page to show all of the content that a single user has entered or created. Parameters in routes We have prepared most of the work required to add a new user route. Open up the router.js file again. 
Add the new route, as follows: <PrivateRoute path="/user/:username" component={props => <User {...props} changeLoginState={this.props.changeLoginState}/>} loggedIn={this.props.loggedIn}/> Those are all of the changes that we need to accept parameterized paths in React Router. We read out the value inside of the new user page component. Before implementing it, we import the dependency at the top of router.js to get the preceding route working: import User from './User'; Create the preceding User.js file next to the Main.js file. Like the Main component, we are collecting all of the components that we render on this page. You should stay with this layout, as you can directly see which main parts each page consists of. The User.js file should look as follows: import React, { Component } from 'react'; import UserProfile from './components/user'; import Chats from './Chats'; import Bar from './components/bar'; import CurrentUserQuery from './components/queries/currentUser'; export default class User extends Component { render() { return ( <CurrentUserQuery> <Bar changeLoginState={this.props.changeLoginState}/> <UserProfile username={this.props.match.params.username}/> <Chats /> </CurrentUserQuery> ); }} We use the CurrentUserQuery component as a wrapper for the Bar component and the Chats component. If a user visits the profile of a friend, they see the common application bar at the top. They can access their chats on the right-hand side, like in Facebook. We removed the Feed component and replaced it with a new UserProfile component. Importantly, the UserProfile receives the username property. Its value is taken from the properties of the User component. These properties were passed over by React Router. If you have a parameter, such as a username, in the routing path, the value is stored in the match.params.username property of the child component. The match object generally contains all matching information of React Router. From this point on, you can implement any custom logic that you want with this value. We will now continue with implementing the profile page. Follow these steps to build the user's profile page: Create a new folder, called user, inside the components folder. Create a new file, called index.js, inside the user folder. Import the dependencies at the top of the file, as follows: import React, { Component } from 'react'; import PostsQuery from '../queries/postsFeed'; import FeedList from '../post/feedlist'; import UserHeader from './header'; import UserQuery from '../queries/userQuery'; The first three lines should look familiar. The last two imported files, however, do not exist at the moment, but we are going to change that shortly. The first new file is UserHeader, which takes care of rendering the avatar image, the name, and information about the user. Logically, we request the data that we will display in this header through a new Apollo query, called UserQuery. Insert the code for the UserProfile component that we are building at the moment beneath the dependencies, as follows: export default class UserProfile extends Component { render() { const query_variables = { page: 0, limit: 10, username: this.props.username }; return ( <div className="user"> <div className="inner"> <UserQuery variables={{username: this.props.username}}> <UserHeader/> </UserQuery> </div> <div className="container"> <PostsQuery variables={query_variables}> <FeedList/> </PostsQuery> </div> </div> ) } } The UserProfile class is not complex. We are running two Apollo queries simultaneously. 
Both have the variables property set. The PostQuery receives the regular pagination fields, page and limit, but also the username, which initially came from React Router. This property is also handed over to the UserQuery, inside of a variables object. We should now edit and create the Apollo queries, before programming the profile header component. Open the postsFeed.js file from the queries folder. To use the username as input to the GraphQL query we first have to change the query string from the GET_POSTS variable. Change the first two lines to match the following code: query postsFeed($page: Int, $limit: Int, $username: String) { postsFeed(page: $page, limit: $limit, username: $username) { Add a new line to the getVariables method, above the return statement: if(typeof variables.username !== typeof undefined) { query_variables.username = variables.username; } If the custom query component receives a username property, it is included in the GraphQL request. It is used to filter posts by the specific user that we are viewing. Create a new userQuery.js file in the queries folder to create the missing query class. Import all of the dependencies and parse the new query schema with graphl-tag, as follows: import React, { Component } from 'react'; import { Query } from 'react-apollo'; import Loading from '../loading'; import Error from '../error'; import gql from 'graphql-tag'; const GET_USER = gql` query user($username: String!) { user(username: $username) { id email username avatar } }`; The preceding query is nearly the same as the currentUser query. We are going to implement the corresponding user query later, in our GraphQL API. The component itself is as simple as the ones that we created before. Insert the following code: export default class UserQuery extends Component { getVariables() { const { variables } = this.props; var query_variables = {}; if(typeof variables.username !== typeof undefined) { query_variables.username = variables.username; } return query_variables; } render() { const { children } = this.props; const variables = this.getVariables(); return( <Query query={GET_USER} variables={variables}> {({ loading, error, data }) => { if (loading) return <Loading />; if (error) return <Error><p>{error.message}</p></Error>; const { user } = data; return React.Children.map(children, function(child){ return React.cloneElement(child, { user }); }) }} </Query> ) } } We set the query property and the parameters that are collected by the getVariables method to the GraphQL Query component. The rest is the same as any other query component that we have written. All child components receive a new property, called user, which holds all the information about the user, such as their name, their email, and their avatar image. The last step is to implement the UserProfileHeader component. This component renders the user property, with all its values. It is just simple HTML markup. Copy the following code into the header.js file, in the user folder: import React, { Component } from 'react';export default class UserProfileHeader extends Component { render() { const { avatar, email, username } = this.props.user; return ( <div className="profileHeader"> <div className="avatar"> <img src={avatar}/> </div> <div className="information"> <p> {username} </p> <p> {email} </p> <p>You can provide further information here and build your really personal header component for your users.</p> </div> </div> ) }} We have finished the new front end components, but the UserProfile component is still not working. 
The queries that we are using here either do not accept the username parameter or have not yet been implemented. Querying the user profile With the new profile page, we have to update our back end accordingly. Let's take a look at what needs to be done, as follows: We have to add the username parameter to the schema of the postsFeed query and adjust the resolver function. We have to create the schema and the resolver function for the new UserQuery component. We will begin with the postsFeed query. Edit the postsFeed query in the RootQuery type of the schema.js file to match the following code: postsFeed(page: Int, limit: Int, username: String): PostFeed @auth Here, I have added the username as an optional parameter. Now, head over to the resolvers.js file, and take a look at the corresponding resolver function. Replace the signature of the function to extract the username from the variables, as follows: postsFeed(root, { page, limit, username }, context) { To make use of the new parameter, add the following lines of code above the return statement: if(typeof username !== typeof undefined) { query.include = [{model: User}]; query.where = { '$User.username$': username }; } In the preceding code, we fill the include field of the query object with the Sequelize model that we want to join. This allows us to filter the associated Chats model in the next step. Then, we create a normal where object, in which we write the filter condition. If you want to filter the posts by an associated table of users, you can wrap the model and field names that you want to filter by with dollar signs. In our case, we wrap User.username with dollar signs, which tells Sequelize to query the User model's table and filter by the value of the username column. No adjustments are required for the pagination part. The GraphQL query is now ready. The great thing about the small changes that we have made is that we have just one API function that accepts several parameters, either to display posts on a single user profile, or to display a list of posts like a news feed. Let's move on and implement the new user query. Add the following line to the RootQuery in your GraphQL schema: user(username: String!): User @auth This query only accepts a username, but this time it is a required parameter in the new query. Otherwise, the query would make no sense, since we only use it when visiting a user's profile through their username. In the resolvers.js file, we will now implement the resolver function using Sequelize: user(root, { username }, context) { return User.findOne({ where: { username: username } }); }, In the preceding code, we use the findOne method of the User model by Sequelize, and search for exactly one user with the username that we provided in the parameter. We also want to display the email of the user on the user's profile page. Add the email as a valid field on the User type in your GraphQL schema with the following line of code: email: String With this step, our back end code and the user page are ready. This article walked you through the installation process of React Router and how to implement a route in React. Then we moved on to more advanced stuff by implementing a user profile, similar to Facebook, with a GraphQL backend. If you found this post useful, do check out the book, Hands-on Full-Stack Web Development with GraphQL and React. This book teaches you how to build scalable full-stack applications while learning to solve complex problems with GraphQL. How to build a Relay React App [Tutorial] React vs. 
Vue: JavaScript framework wars Working with the Vue-router plugin for SPAs
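One piece that the excerpt above references but never shows is the PrivateRoute component used in router.js. Its implementation appears elsewhere in the book; the following is only a minimal sketch, consistent with the loggedIn and component props used above, of how such a guard might redirect unauthenticated users to the root path where the login form lives.

// Hypothetical PrivateRoute sketch (not the book's exact implementation).
// It renders the given component when loggedIn is true and otherwise
// redirects to the root path.
import React from 'react';
import { Route, Redirect } from 'react-router-dom';

const PrivateRoute = ({ component: Component, loggedIn, ...rest }) => (
  <Route
    {...rest}
    render={props =>
      loggedIn ? <Component {...props} /> : <Redirect to="/" />
    }
  />
);

export default PrivateRoute;

With a guard like this, the /app and /user/:username routes stay inaccessible until changeLoginState has marked the user as logged in.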
Applying styles to Material-UI components in React [Tutorial]

Bhagyashree R
18 May 2019
14 min read
The majority of styles that are applied to Material-UI components are part of the theme styles. In some cases, you need the ability to style individual components without changing the theme. For example, a button in one feature might need a specific style applied to it that shouldn't change every other button in the app. Material-UI provides several ways to apply custom styles to components as a whole, or to specific parts of components. This article is taken from the book React Material-UI Cookbook by Adam Boduch by Adam Boduch.  This book will serve as your ultimate guide to building compelling user interfaces with React and Material Design. Filled with practical and to-the-point recipes, you will learn how to implement sophisticated-UI components. To follow along with the examples implemented in this article, you can download the code from the book’s GitHub repository. In this article, we will look at the various styling solutions to design appealing user interfaces including basic component styles, scoped component styles, extending component styles, moving styles to themes, and others. Basic component styles Material uses JavaScript Style Sheets (JSS) to style its components. You can apply your own JSS using the utilities provided by Material-UI. How to do it... The withStyles() function is a higher-order function that takes a style object as an argument. The function that it returns takes the component to style as an argument. Here's an example: import React, { useState } from 'react';import { withStyles } from '@material-ui/core/styles';import Card from '@material-ui/core/Card';import CardActions from '@material-ui/core/CardActions';import CardContent from '@material-ui/core/CardContent';import Button from '@material-ui/core/Button';import Typography from '@material-ui/core/Typography';const styles = theme => ({card: {width: 135,height: 135,textAlign: 'center'},cardActions: {justifyContent: 'center'}});const BasicComponentStyles = withStyles(styles)(({ classes }) => {const [count, setCount] = useState(0);const onIncrement = () => {setCount(count + 1);};return (<Card className={classes.card}><CardContent><Typography variant="h2">{count}</Typography></CardContent><CardActions className={classes.cardActions}><Button size="small" onClick={onIncrement}>Increment</Button></CardActions></Card>);});export default BasicComponentStyles; Here's what this component looks like: How it works... Let's take a closer look at the styles defined by this example: const styles = theme => ({ card: { width: 135, height: 135, textAlign: 'center' }, cardActions: { justifyContent: 'center' } }); The styles that you pass to withStyles() can be either a plain object or a function that returns a plain object, as is the case with this example. The benefit of using a function is that the theme values are passed to the function as an argument, in case your styles need access to the theme values. There are two styles defined in this example: card and cardActions. You can think of these as Cascading Style Sheets (CSS) classes. Here's what these two styles would look like as CSS: .card { width: 135 height: 135 text-align: center}.cardActions {justify-content: center} By calling withStyles(styles)(MyComponent), you're returning a new component that has a classes property. This object has all of the classes that you can apply to components now. 
You can't just do something such as this:

<Card className="card" />

When you define your styles, they go through their own build process and every class ends up with its own generated name. This generated name is what you'll find in the classes object, and that's why you use classes.card instead of the plain string 'card'.

There's more...

Instead of working with higher-order functions that return new components, you can leverage Material-UI style hooks. This example already relies on the useState() hook from React, so using another hook in the component feels like a natural extension of the pattern that is already in place. Here's what the example looks like when refactored to take advantage of the makeStyles() function:

import React, { useState } from 'react';
import { makeStyles } from '@material-ui/styles';
import Card from '@material-ui/core/Card';
import CardActions from '@material-ui/core/CardActions';
import CardContent from '@material-ui/core/CardContent';
import Button from '@material-ui/core/Button';
import Typography from '@material-ui/core/Typography';

const useStyles = makeStyles(theme => ({
  card: {
    width: 135,
    height: 135,
    textAlign: 'center'
  },
  cardActions: {
    justifyContent: 'center'
  }
}));

export default function BasicComponentStyles() {
  const classes = useStyles();
  const [count, setCount] = useState(0);

  const onIncrement = () => {
    setCount(count + 1);
  };

  return (
    <Card className={classes.card}>
      <CardContent>
        <Typography variant="h2">{count}</Typography>
      </CardContent>
      <CardActions className={classes.cardActions}>
        <Button size="small" onClick={onIncrement}>
          Increment
        </Button>
      </CardActions>
    </Card>
  );
}

The useStyles() hook is built with the makeStyles() function, which takes the exact same styles argument as withStyles(). By calling useStyles() within the component, you have your classes object. Another important thing to point out is that makeStyles is imported from @material-ui/styles, not @material-ui/core/styles.

Scoped component styles

Most Material-UI components have a CSS API that is specific to the component. This means that instead of having to assign a class name to the className property for every component that you need to customize, you can target specific aspects of the component that you want to change. Material-UI has laid the foundation for scoping component styles; you just need to leverage the APIs.

How to do it...

Let's say that you have the following style customizations that you want to apply to the Button components used throughout your application:

1. Every button needs a margin by default.
2. Every button that uses the contained variant should have additional top and bottom padding.
3. Every button that uses the contained variant and the primary color should have additional top and bottom padding, as well as additional left and right padding.
Here's an example that shows how to use the Button CSS API to target these three different Button types with styles:

import React, { Fragment } from 'react';
import { withStyles } from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

const ScopedComponentStyles = withStyles(styles)(
  ({ classes: { root, contained, containedPrimary } }) => (
    <Fragment>
      <Button classes={{ root }}>My Default Button</Button>
      <Button classes={{ root, contained }} variant="contained">
        My Contained Button
      </Button>
      <Button
        classes={{ root, contained, containedPrimary }}
        variant="contained"
        color="primary"
      >
        My Contained Primary Button
      </Button>
    </Fragment>
  )
);

export default ScopedComponentStyles;

The result is three buttons: a default button with a margin, a contained button with extra vertical padding, and a contained primary button with extra padding on every side.

How it works...

The Button CSS API takes named styles and applies them to the component. The same names are used in the styles in this code. For example, root applies to every Button component, whereas contained only applies its styles to Button components that use the contained variant, and containedPrimary only applies to Button components that use both the contained variant and the primary color.

There's more...

Each style is destructured from the classes property and then applied to the appropriate Button component. However, you don't actually need to do all of this work. Since the Material-UI CSS API takes care of applying styles to components in a way that matches what you're actually targeting, you can just pass the whole classes object directly to the buttons and get the same result. Here's a simplified version of this example:

import React, { Fragment } from 'react';
import { withStyles } from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

const ScopedComponentStyles = withStyles(styles)(({ classes }) => (
  <Fragment>
    <Button classes={classes}>My Default Button</Button>
    <Button classes={classes} variant="contained">
      My Contained Button
    </Button>
    <Button classes={classes} variant="contained" color="primary">
      My Contained Primary Button
    </Button>
  </Fragment>
));

export default ScopedComponentStyles;

The output looks the same because only buttons that match the constraints of the CSS API get the styles applied to them. For example, the first Button has the root, contained, and containedPrimary styles passed to its classes property, but only root is applied because it doesn't use the contained variant or the primary color. The second Button also has all three styles passed to it, but only root and contained are applied. The third Button has all three styles applied to it because it meets the criteria of each style.

Extending component styles

You can extend styles that you apply to one component with styles that you apply to another component. Since your styles are JavaScript objects, one option is to extend one style object with another. The only problem with this approach is that you end up with a lot of duplicate style properties in the CSS output. A better alternative is to use the jss extend plugin.

How to do it...

Let's say that you want to render three buttons and share some of the styles among them.
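For contrast, here's roughly what the plain object-spread approach mentioned above could look like. This is a hedged sketch rather than code from the book; the base object name and the hard-coded pixel values are illustrative. Every property of the shared object gets copied into each JavaScript style that reuses it, which is the duplication the jss extend plugin lets you avoid writing by hand:

// Extending styles with plain object spread: the shared properties are
// physically copied into every style object that uses them.
const base = {
  margin: 16
};

const styles = {
  root: { ...base },
  contained: {
    ...base,               // margin duplicated here
    paddingTop: 16,
    paddingBottom: 16
  },
  containedPrimary: {
    ...base,               // and duplicated again here
    paddingTop: 16,
    paddingBottom: 16,
    paddingLeft: 32,
    paddingRight: 32
  }
};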
One approach is to extend generic styles with more specific styles using the jss extend plugin. Here's how to do it:

import React, { Fragment } from 'react';
import { JssProvider, jss } from 'react-jss';
import {
  withStyles,
  createGenerateClassName
} from '@material-ui/styles';
import {
  createMuiTheme,
  MuiThemeProvider
} from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    extend: 'root',
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    extend: 'contained',
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

const App = ({ children }) => (
  <JssProvider
    jss={jss}
    generateClassName={createGenerateClassName()}
  >
    <MuiThemeProvider theme={createMuiTheme()}>
      {children}
    </MuiThemeProvider>
  </JssProvider>
);

const Buttons = withStyles(styles)(({ classes }) => (
  <Fragment>
    <Button className={classes.root}>My Default Button</Button>
    <Button className={classes.contained} variant="contained">
      My Contained Button
    </Button>
    <Button
      className={classes.containedPrimary}
      variant="contained"
      color="primary"
    >
      My Contained Primary Button
    </Button>
  </Fragment>
));

const ExtendingComponentStyles = () => (
  <App>
    <Buttons />
  </App>
);

export default ExtendingComponentStyles;

The rendered buttons look the same as in the previous section: every button gets the shared margin, and the contained and contained primary buttons add their extra padding.

How it works...

The easiest way to use the jss extend plugin in your Material-UI application is to use the default JSS plugin presets, which include jss extend. Material-UI has several JSS plugins installed by default, but jss extend isn't one of them. Let's take a look at the App component in this example to see how the plugin is made available:

const App = ({ children }) => (
  <JssProvider
    jss={jss}
    generateClassName={createGenerateClassName()}
  >
    <MuiThemeProvider theme={createMuiTheme()}>
      {children}
    </MuiThemeProvider>
  </JssProvider>
);

The JssProvider component is how JSS is enabled in Material-UI applications. Normally, you wouldn't have to interface with it directly, but this is necessary when adding a new JSS plugin. The jss property takes the JSS preset object that includes the jss extend plugin. The generateClassName property takes a function from Material-UI that helps generate class names that are specific to Material-UI.

Next, let's take a closer look at the styles:

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    extend: 'root',
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    extend: 'contained',
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

The extend property takes the name of a style that you want to extend. In this case, the contained style extends root, and containedPrimary extends contained (and, through it, root). Now let's take a look at how this translates into CSS. Here's what the root style looks like:

.Component-root-1 {
  margin: 16px;
}

Next, here's the contained style:

.Component-contained-2 {
  margin: 16px;
  padding-top: 16px;
  padding-bottom: 16px;
}

Finally, here's the containedPrimary style:

.Component-containedPrimary-3 {
  margin: 16px;
  padding-top: 16px;
  padding-left: 32px;
  padding-right: 32px;
  padding-bottom: 16px;
}

Note that the properties from the more generic styles are included in the more specific ones. Some properties are duplicated, but the duplication is confined to the generated CSS; you don't have to duplicate the JavaScript object properties. Furthermore, you could put these extended styles in a more central location in your code base, so that multiple components could use them.
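As a sketch of that last point, the extend-based styles could live in their own module and be pulled in by any component that needs them. The file names, the exported buttonStyles function, and the AnotherScreen component below are made up for illustration, and the sketch assumes the jss extend plugin has been enabled exactly as shown above:

// buttonStyles.js: shared, extend-based style definitions
export const buttonStyles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    extend: 'root',
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  }
});

// AnotherScreen.js: any component can reuse the same definitions
import React from 'react';
import { withStyles } from '@material-ui/styles';
import Button from '@material-ui/core/Button';
import { buttonStyles } from './buttonStyles';

const AnotherScreen = withStyles(buttonStyles)(({ classes }) => (
  <Button className={classes.contained} variant="contained">
    Reused Styles
  </Button>
));

export default AnotherScreen;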
Moving styles to themes

As you develop your Material-UI application, you'll start to notice style patterns that repeat themselves. In particular, styles that apply to one type of component, such as buttons, evolve into a theme.

How to do it...

Let's revisit the example from the Scoped component styles section:

import React, { Fragment } from 'react';
import { withStyles } from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

const ScopedComponentStyles = withStyles(styles)(({ classes }) => (
  <Fragment>
    <Button classes={classes}>My Default Button</Button>
    <Button classes={classes} variant="contained">
      My Contained Button
    </Button>
    <Button classes={classes} variant="contained" color="primary">
      My Contained Primary Button
    </Button>
  </Fragment>
));

export default ScopedComponentStyles;

These buttons render exactly as they did in the Scoped component styles section.

Now, let's say you've implemented these same styles in several places throughout your app because this is how you want your buttons to look. At this point, you've evolved a simple component customization into a theme. When this happens, you shouldn't have to keep implementing the same styles over and over again. Instead, the styles should be applied automatically when you use the correct component with the correct property values. Let's move these styles into the theme:

import React from 'react';
import {
  createMuiTheme,
  MuiThemeProvider
} from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const defaultTheme = createMuiTheme();

const theme = createMuiTheme({
  overrides: {
    MuiButton: {
      root: {
        margin: 16
      },
      contained: {
        paddingTop: defaultTheme.spacing(2),
        paddingBottom: defaultTheme.spacing(2)
      },
      containedPrimary: {
        paddingLeft: defaultTheme.spacing(4),
        paddingRight: defaultTheme.spacing(4)
      }
    }
  }
});

const MovingStylesToThemes = () => (
  <MuiThemeProvider theme={theme}>
    <Button>My Default Button</Button>
    <Button variant="contained">My Contained Button</Button>
    <Button variant="contained" color="primary">
      My Contained Primary Button
    </Button>
  </MuiThemeProvider>
);

export default MovingStylesToThemes;

Now, you can use Button components without having to apply the same styles every time.

How it works...

Let's take a closer look at how your styles fit into a Material-UI theme:

overrides: {
  MuiButton: {
    root: {
      margin: 16
    },
    contained: {
      paddingTop: defaultTheme.spacing(2),
      paddingBottom: defaultTheme.spacing(2)
    },
    containedPrimary: {
      paddingLeft: defaultTheme.spacing(4),
      paddingRight: defaultTheme.spacing(4)
    }
  }
}

The overrides property is an object that allows you to override the component-specific properties of the theme. In this case, it's the MuiButton component styles that you want to override. Within MuiButton, you have the same CSS API that is used to target specific aspects of components. This makes moving your styles into the theme straightforward because there isn't much to change. One thing that did have to change in this example is the way spacing works. In normal styles applied via withStyles(), you have access to the current theme because it's passed in as an argument. Here you still need access to the spacing data, but there's no theme argument because you're not inside a styles function.
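One way around this, and the approach the example above takes, is to create a standalone default theme instance up front and read spacing values from it while building the overrides. A minimal sketch, with only the root margin shown:

import { createMuiTheme } from '@material-ui/core/styles';

// createMuiTheme() with no arguments returns the default theme, so
// defaultTheme.spacing(2) evaluates to 16 pixels out of the box.
const defaultTheme = createMuiTheme();

const theme = createMuiTheme({
  overrides: {
    MuiButton: {
      root: {
        margin: defaultTheme.spacing(2) // instead of a hard-coded 16
      }
    }
  }
});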
Since you're just extending the default theme, calling createMuiTheme() without any arguments gives you an instance with the standard spacing values to read from, as the example shows.

This article explored some of the ways you can apply styles to the Material-UI components in your React applications. There are many other styling options available to your Material-UI app beyond withStyles(): there's the styled() higher-order component function that emulates styled components, and you can also step outside the Material-UI style system altogether and use inline CSS styles, or import CSS modules and apply those styles.

If you found this post useful, do check out the book React Material-UI Cookbook by Adam Boduch. This book will help you build modern-day applications by implementing Material Design principles in React applications using Material-UI.

Keeping animations running at 60 FPS in a React Native app [Tutorial]
React Native development tools: Expo, React Native CLI, CocoaPods [Tutorial]
Building a Progressive Web Application with Create React App 2 [Tutorial]

article-image-bryan-cantrill-on-the-changing-ethical-dilemmas-in-software-engineering
Vincy Davis
17 May 2019
6 min read
Save for later

Bryan Cantrill on the changing ethical dilemmas in Software Engineering

Vincy Davis
17 May 2019
6 min read
Earlier this month at the Craft Conference in Budapest, Bryan Cantrill (Chief Technology Officer at Joyent) gave a talk on "Andreessen's Corollary: Ethical Dilemmas in Software Engineering". In 2011, Marc Andreessen penned an essay, 'Why Software Is Eating The World', in The Wall Street Journal. In it, he talks about how software is present in every field and is poised to take over large swathes of the economy. He believed, way back in 2011, that "many of the prominent new Internet companies are building real, high-growth, high-margin, highly defensible businesses." Eight years later, Bryan Cantrill believes this prophecy is clearly coming to fulfillment. According to the article 'Software engineering code of ethics' published in 1997 by the ACM (Association for Computing Machinery), a code of ethics is not a simple program that generates ethical judgements; in some situations, its principles can conflict with each other, and a software engineer then has to exercise judgement in a way that stays consistent with the code. The article lays out principles for software engineers to follow, and according to Bryan, these principles are difficult to live up to. Some of them expect software engineers to ensure that the product they are working on is useful and of acceptable quality to the public, the employer, the client, and the user; is completed on time and at reasonable cost; and is free of errors. Specifications should be well documented, match the user's requirements, and have the client's approval, and projects should follow an appropriate methodology and good management. Software engineers should also give realistic estimates of the cost, schedule, and outcome of any project on which they work or propose to work. The guiding context surrounding the code of ethics remains timeless, but with time, these principles have come to feel old-fashioned. With software now used everywhere and whole industries trying to implement these codes, it's difficult for software engineers to follow the old principles and remain ethically sound. Bryan calls this era an 'ethical grey area' for software engineers. Software's contact with our broader world has brought with it novel ethical dilemmas for those who endeavor to build it. More than ever, software engineers are likely to find themselves on new frontiers with respect to society, the law, or their own moral compass. Often without any formal training, or even acknowledgement of the ethical dimensions of their work, software engineers have to make ethical judgments.

Ethical dilemmas in software development since Andreessen's prophecy

2012: Facebook began performing emotional-manipulation experiments, classifying the posts shown to users as positive or negative, in the name of research or to generate revenue.

2013: Zenefits, a Silicon Valley startup, had to have its employees certified by the state of California, which required sitting through 52 hours of training material in the web browser. A manager created a hack called 'Macro' that made it possible to complete the pre-licensing education requirement in less than 52 hours, and it was passed on to almost 100 Zenefits employees to automate the process for them too.

2014: Uber illegally entered the Portland market with a tool called 'Greyball', which it used to intentionally identify and evade Portland Bureau of Transportation (PBOT) officers and deny their ride requests.
2015: Google Photos started mislabeling images; in one case it identified a dark-skinned individual as a 'gorilla'. Google reacted promptly and removed the label, but the incident highlighted a real weakness of Artificial Intelligence (AI): it relies on human classification, which can be biased, and it amplifies the patterns it is shown. Google was left defending a mistake it had never intentionally fed into its network.

2016: The first Tesla 'Autopilot' car was launched. It had traffic-aware cruise control and steering-assist features, but it was sold and marketed as an autopilot. In one accident the driver was killed, perhaps because he believed the car would drive itself. This was a serious problem: the car relied on two cameras to judge movement while driving, and the system was an enhancement to the driver, not a replacement.

2017: Facebook faced criticism over the anti-Rohingya violence in Myanmar. Facebook messages were used to coordinate a genocide against the Rohingya, a mostly Muslim minority community, in which some 75,000 people died. Facebook did not enable or advocate the violence; it was merely a communication platform used for a terrible purpose. But Facebook could have reduced the gravity of the situation by acting promptly and not allowing such messages to circulate. This shows that not everything should be automated, and human judgement cannot be replaced anytime soon.

2018: In the wake of the Pittsburgh synagogue shooting, it emerged that the alleged shooter had used the Gab platform to post against Jews. Gab, which bills itself as "the free speech social network," is small compared to mainstream social media platforms, but it has an avid user base. Joyent provided infrastructure to Gab, but quickly removed it from its platform after the horrific incident.

2019: After the Boeing 737 MAX crashes (flights JT610 and ET302), reports emerged that the aircraft's MCAS system played a role. The crashes happened because a faulty sensor erroneously reported that the airplane was stalling, and the false report triggered an automated system known as the Maneuvering Characteristics Augmentation System (MCAS), which activates without the pilot's input. The crew also confirmed that the manual trim operation was not working.

These are some examples of the ethical dilemmas that have appeared since Andreessen's prophecy. As seen above, all of these incidents were the result of ethical decisions gone wrong, and it is clear that 'what is right for software is not necessarily right for society.'

How to deal with these ethical dilemmas?

In the summer of 2018, the ACM came up with a new code of ethics, which asks practitioners to:

1. Contribute to society and human well-being
2. Avoid harm
3. Be honest and trustworthy

It has also included an Integrity Project, which will offer case studies and an "Ask an Ethicist" feature. These efforts by the ACM will help software engineers facing ethical dilemmas and pave the way for discussions that result in behavior consistent with the code of ethics. Organisations should encourage such discussions; they help like-minded people perpetuate a culture of weighing ethical consequences. As software's footprint continues to grow, the ethical dilemmas facing software engineers will only expand. These ethical dilemmas are Andreessen's corollary, and software engineers must address them collectively and directly. Software engineers agree that the ethical dilemmas they face keep evolving:
https://twitter.com/MA_Hanin/status/1129082836512911360

Watch the full talk by Bryan Cantrill at the Craft Conference.

All coding and no sleep makes Jack/Jill a dull developer, research confirms
Red Badger Tech Director Viktor Charypar talks monorepos, lifelong learning, and the challenges facing open source software
Google AI engineers introduce Translatotron, an end-to-end speech-to-speech translation model