
Tech News - Server-Side Web Development

85 Articles

Symfony leaves PHP-FIG, the framework interoperability group

Amrata Joshi
21 Nov 2018
2 min read
Yesterday, Symfony, a community of 600,000 developers from more than 120 countries, announced that it will no longer be a member of PHP-FIG, the framework interoperability group. Before Symfony, other major members to leave the group included Laravel, Propel, Guzzle, and Doctrine. The main goal of PHP-FIG is to maintain interoperability, discuss commonalities between projects, and work together to make them better.

Why Symfony is leaving PHP-FIG

PHP-FIG has been working on various PSRs (PHP Standard Recommendations). Kévin Dunglas, a core team member at Symfony, said, “It looks like it's not the goal anymore, 'cause most (but not all) new PSRs are things no major frameworks ask for, and that they can't implement without breaking their whole ecosystem.”

https://twitter.com/fabpot/status/1064946913596895232

The departure of those major contributors could be one of the main reasons for Symfony to quit. Still, many seem disappointed by Symfony's move, as they aren't satisfied with the reasons given.

https://twitter.com/mickael_andrieu/status/1065001101160792064

Symfony's concern was that the major projects were not being implemented as a combined effort.

https://twitter.com/dunglas/status/1065004250005204998
https://twitter.com/dunglas/status/1065002600402247680

Something similar happened while working on PSR-7, where commonalities between the projects were not given importance; instead, it was treated as a new, separate framework.

https://twitter.com/dunglas/status/1065007290217058304
https://twitter.com/titouangalopin/status/1064968608646864897

People are still arguing over why Symfony quit.

https://twitter.com/gmponos/status/1064985428300914688

Will the PSRs die?

With this latest move by Symfony, questions have been raised about the project's next step. Will it still support PSRs, or is this the end of the PSRs? Kévin Dunglas answered this question in one of his tweets: “Regarding PSRs, I think we'll implement them if relevant (such as PSR-11) but not the ones not in the spirit of a broad interop (as PSR-7/14).”

To know more about this news, check out Fabien Potencier’s Twitter thread.

Perform CRUD operations on MongoDB with PHP
Introduction to Functional Programming in PHP
Building a Web Application with PHP and MariaDB – Introduction to caching


Django is revamping its governance model, plans to dissolve Django Core team

Bhagyashree R
21 Nov 2018
4 min read
Yesterday, James Bennett, a software developer and an active contributor to the Django web framework, published a summary of a proposal to dissolve the Django core team and revoke commit bits. Re-forming or reorganizing the Django core team has been a topic of discussion for the last couple of years, and this proposal aims to turn that discussion into real action.

What are the reasons behind the proposal to dissolve the Django core team?

Unable to bring in new contributors

Django, as an open source project, has had difficulty recruiting and retaining the contributors needed to keep the project alive. Typically, open source projects avoid this situation through corporate sponsorship of contributions: companies that rely on the software employ people who are responsible for maintaining it. This was true for Django as well, but it hasn't worked out as a long-term plan. Compared to the growth of the web framework, it has hardly been able to draw contributors from across its entire user base, and it has not been able to bring in new committers at a sufficient rate to replace those who have become less active or completely inactive. This essentially means that Django depends on the goodwill of contributors who mostly don't get paid to work on it and are very few in number, which poses a risk to the future of the framework.

Django committer is seen as a high-prestige title

Currently, decisions are made by consensus, involving input from committers and non-committers on the django-developers list, and the commits to the main Django repository are made by the Django Fellows. Even people who have commit bits of their own, and could therefore push their changes straight into Django, typically use pull requests and start a discussion. Actual governance rarely relies on the committers, but Django committer is still seen as a high-prestige title, and committers are given a lot of respect by the wider community. This creates an impression among potential contributors that they're not “good enough” to match up to those “awe-inspiring titanic beings”.

What is this proposal about?

Given the reasons above, the proposal is to dissolve the Django core team and revoke the commit bits. In their place, it introduces two roles: Mergers, who would merge pull requests into Django, and Releasers, who would package and publish releases. Rather than being all-powerful decision-makers, these would be bureaucratic roles. The current set of Fellows would act as the initial set of Mergers, and something similar would happen for Releasers. Instead of committers making decisions, governance would take place entirely in public, on the django-developers mailing list. As a final tie-breaker, the technical board would be retained and given some extra decision-making power, mostly related to selecting the Merger/Releaser roles and confirming that new versions of Django are ready for release. The technical board would be elected less often than it currently is, and the voting would be open to the public. The Django Software Foundation (DSF) would act as a neutral administrator of the technical board elections.

What are the goals this proposal aims to achieve?

Bennett believes that eliminating the distinction between committers and “ordinary contributors” will open the door to more contributors: “Removing the distinction between godlike “committers” and plebeian ordinary contributors will, I hope, help to make the project feel more open to contributions from anyone, especially by making the act of committing code to Django into a bureaucratic task, and making all voices equal on the django-developers mailing list.”

The technical board remains as a backstop for resolving deadlocked decisions, and the proposal gives it additional authority, such as issuing the final go-ahead on releases. Retaining the technical board ensures that Django will not descend into some sort of “chaotic mob rule”. With this proposal, the formal description of Django's governance also becomes much more in line with how the project has actually worked for the past several years.

To know more in detail, read the post by James Bennett: Django Core no more.

Django 2.1.2 fixes major security flaw that reveals password hash to “view only” admin users
Django 2.1 released with new model view permission and more
Getting started with Django and Django REST frameworks to build a RESTful app


Introducing Cycle.js, a functional and reactive JavaScript framework

Bhagyashree R
19 Nov 2018
3 min read
Cycle.js is a functional and reactive JavaScript framework for writing predictable code. Apps built with Cycle.js consist of pure functions, which means they only take inputs and generate predictable outputs, without performing any I/O effects.

What is the basic concept behind Cycle.js?

Cycle.js considers your application a pure main() function. It takes inputs that are read effects (sources) from the external world and gives outputs that are write effects (sinks) to affect the external world. Drivers, plugins that handle DOM effects, HTTP effects, and so on, are responsible for performing these I/O effects in the external world.

Source: Cycle.js

main() is built using Reactive Programming primitives that maximize separation of concerns and provide a fully declarative way of organizing your code. The dataflow in your app is clearly visible in the code, making it readable and traceable. Here are some of its properties:

Functional and reactive

As Cycle.js is functional and reactive, it allows developers to write predictable and separated code. Its building blocks are reactive streams from libraries like RxJS, xstream, or Most.js. These greatly simplify code related to events, asynchrony, and errors. This application structure also separates concerns, as all dynamic updates to a piece of data are co-located and impossible to change from outside.

Simple and concise

The framework is easy to learn and get started with, as it has very few concepts. Its core API has just one function, run(app, drivers). Apart from that, there are streams, functions, drivers, and a helper function to isolate scoped components. Most of its building blocks are just JavaScript functions. Functional reactive streams can build complex dataflows with very few operations, which makes apps in Cycle.js very small and readable.

Extensible and testable

In Cycle.js, drivers are simple functions that take messages from sinks and call imperative functions. All I/O effects are done by the drivers, which means your application is just a pure function. This makes it very easy to swap the drivers around. Currently, there are drivers for React Native, HTML5 Notification, Socket.io, and so on. Also, with Cycle.js, testing is just a matter of feeding inputs and inspecting the output.

Composable

As mentioned earlier, a Cycle.js app, no matter how complex, is a function that can be reused in a larger Cycle.js app. Sources and sinks act as the interface between the application and the drivers, but they are also the interface between a child component and its parent. Its components are not just GUI widgets as in other frameworks: you can make Web Audio components, network request components, and others, since the sources/sinks interface is not exclusive to the DOM.

You can read more about Cycle.js on its official website.

Introducing Howler.js, a Javascript audio library with full cross-browser support
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out
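To make the main()/sources/sinks model concrete, here is a minimal sketch of a Cycle.js app, loosely based on the framework's getting-started example; it assumes the @cycle/run and @cycle/dom packages and renders a greeting that updates as the user types:

```js
const { run } = require('@cycle/run');
const { div, input, h1, makeDOMDriver } = require('@cycle/dom');

// main() is a pure function: it reads a stream of events from its
// sources (read effects) and returns streams as sinks (write effects).
function main(sources) {
  const name$ = sources.DOM.select('.name')
    .events('input')
    .map(ev => ev.target.value)
    .startWith('');

  const vdom$ = name$.map(name =>
    div([
      input('.name', { attrs: { type: 'text' } }),
      h1(name ? `Hello, ${name}!` : 'Hello!'),
    ])
  );

  return { DOM: vdom$ };
}

// The DOM driver performs the actual I/O: it renders the virtual DOM
// stream into the #app element and feeds user events back as a source.
run(main, { DOM: makeDOMDriver('#app') });
```

Note how all I/O lives in the driver; main() itself is a pure function that can be tested by feeding inputs and inspecting the outputs, as the article describes.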


Node v11.2.0 released with major updates in timers, windows, HTTP parser and more

Amrata Joshi
16 Nov 2018
2 min read
Yesterday, the Node.js community released Node v11.2.0. The new version comes with a new experimental HTTP parser (llhttp) along with updates to timers, Windows support, and more. Node v11.1.0 was released earlier this month.

Major updates

Node v11.2.0 comes with a major update in timers, fixing an issue that could cause setTimeout to stop working as expected. If the node.pdb file is available, a crashing process will now show the names of stack frames. The version improves the installer's new stage that installs native build tools, adding a prompt to the tools installation script with a visible warning, which lessens the probability of users skipping ahead without reading. On Windows, the windowsHide option has been set to false; this lets detached child processes and GUI apps start in a new window. The version also introduces the experimental llhttp HTTP parser. llhttp is written in human-readable TypeScript, making it verifiable and easy to maintain. The llparser is used to generate the output C and/or bitcode artifacts, which can be compiled and linked with the embedder's program (such as Node.js). The eventEmitter.emit() method, which allows an arbitrary set of arguments to be passed to the listener functions, has also been added in v11.2.0.

Improvements in Cluster

The cluster module allows easy creation of child processes that share server ports, and it supports two methods of distributing incoming connections. The first is the round-robin approach, the default on all platforms except Windows: the master process listens on a port, accepts new connections, and distributes them across the workers in a round-robin fashion, which avoids overloading any single worker process. In the second approach, the master process creates the listen socket and sends it to interested workers, which then accept incoming connections directly. Theoretically, the second approach gives the best performance.

Read more about this release on the official page of Node.js.

Node.js v10.12.0 (Current) released
Node.js and JS Foundation announce intent to merge; developers have mixed feelings
low.js, a Node.js port for embedded systems
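To illustrate the port-sharing behavior the cluster section describes, here is a small sketch using Node's documented cluster API; the port number is arbitrary:

```js
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // The master accepts connections and, by default on all platforms
  // except Windows, distributes them to the workers round-robin.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Every worker shares the same server port.
  http
    .createServer((req, res) => {
      res.end(`handled by worker ${process.pid}\n`);
    })
    .listen(8000);
}
```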


Basecamp 3 faces a read-only outage of nearly 5 hours

Bhagyashree R
13 Nov 2018
3 min read
Yesterday, Basecamp shared the cause behind the outage Basecamp 3 faced on November 8. The outage lasted nearly five hours, from 7:21 am CST to 12:11 pm. During that time, users could access existing messages, to-do lists, and files, but could not enter new information or alter existing information.

David Heinemeier Hansson, the creator of Ruby on Rails and founder & CTO at Basecamp, said in his post that this was the worst outage Basecamp has faced in probably 10 years: “It’s bad enough that we had the worst outage at Basecamp in probably 10 years, but to know that it was avoidable is hard to swallow. And I cannot express my apologies clearly or deeply enough.”

https://twitter.com/basecamp/status/1060554610241224705

Key causes behind the Basecamp 3 outage

Every activity a user performs, whether posting a message, updating a to-do list, or applauding a comment, is tracked in Basecamp's events table. The root cause of Basecamp going into read-only mode was its database hitting the ceiling of 2,147,483,647 IDs on this very busy events table. Secondly, Ruby on Rails, the programming framework Basecamp uses, updated its default for database tables in version 5.1, released in 2017. That update lifted the headroom for records from 2,147,483,647 to 9,223,372,036,854,775,807 on all tables, but the column in Basecamp's database was still configured as an integer rather than a big integer.

The complete timeline of the outage

7:21 am CST: Basecamp ran out of ID numbers on the events table, because the column was configured as an integer rather than a big integer. An integer column runs out of numbers at 2,147,483,647, while a big integer can grow until 9,223,372,036,854,775,807.
7:29 am CST: The team started working on a database migration to update the column type from regular integer to big integer, and tested the fix on a staging database to make sure it was safe.
7:52 am CST: The test on the staging database verified that the fix was correct, so the team moved on to changing the production database table. Due to the huge size of the production database, the migration was estimated to take about one hour and forty minutes.
10:56 am - 11:52 am CST: The database upgrade was completed, but all the data still had to be verified and configurations updated to ensure no other problems would surface once the application was back online.
12:22 pm CST: After successful verification, Basecamp came back online.
12:33 pm CST: Basecamp went down again because the intense load when the application came back online overwhelmed the caching server.
12:41 pm CST: Basecamp came back online after switching over to the backup caching servers.

To read the entire update on Basecamp’s outage, check out David Heinemeier Hansson’s post on Medium.

GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage
Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
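The two ceilings in the timeline are just the signed 32-bit and 64-bit integer maximums. A quick illustration in JavaScript (the choice of language here is ours, not Basecamp's):

```js
// Signed 32-bit ceiling: the "integer" column type Basecamp's
// events table was using.
const INT4_MAX = 2 ** 31 - 1;
console.log(INT4_MAX); // 2147483647

// Signed 64-bit ceiling: the "bigint" column type the migration
// switched to. It exceeds Number.MAX_SAFE_INTEGER, so JavaScript
// needs BigInt to represent it exactly.
const INT8_MAX = 2n ** 63n - 1n;
console.log(INT8_MAX.toString()); // 9223372036854775807
```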


HTTP-over-QUIC will be officially renamed to HTTP/3

Savia Lobo
12 Nov 2018
2 min read
The protocol known as HTTP-over-QUIC will be officially renamed to HTTP/3. In a discussion on an IETF mail archive thread, Mark Nottingham, chairman of the IETF HTTPBIS Working Group and the W3C Web Services Addressing Working Group, proposed the rename to resolve the confusion between QUIC-the-transport-protocol and QUIC-the-HTTP-binding.

QUIC, a TCP replacement done over UDP, started as an effort by Google and was then more of an "HTTP/2-encrypted-over-UDP" protocol. The QUIC Working Group in the IETF works on creating the QUIC transport protocol. According to Daniel Stenberg, lead developer of curl at Mozilla, “When the work took off in the IETF to standardize the protocol, it was split up in two layers: the transport and the HTTP parts. The idea being that this transport protocol can be used to transfer other data too and it’s not just done explicitly for HTTP or HTTP-like protocols. But the name was still QUIC.”

People in the community have referred to different versions of the protocol using informal names such as iQUIC and gQUIC to separate the IETF and Google QUIC protocols. The protocol that sends HTTP over "iQUIC" was called "hq" (HTTP-over-QUIC) for a long time.

Last week, on November 7, 2018, Dmitri Tikhonov, a programmer at LiteSpeed, announced that his company and Facebook had successfully done the first-ever interop between two HTTP/3 implementations. Here’s Mike Bishop's follow-up presentation at the HTTPbis session on the topic.

https://www.youtube.com/watch?v=uVf_yyMfIPQ&feature=youtu.be&t=4956

Brute forcing HTTP applications and web applications using Nmap [Tutorial]
Phoenix 1.4.0 is out with ‘Presence javascript API’, HTTP2 support, and more!
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]

Security issues in nginx HTTP/2 implementation expose nginx servers to DoS attack

Bhagyashree R
12 Nov 2018
2 min read
Last week, two security issues were reported in the nginx HTTP/2 implementation, which can result in excessive memory consumption and CPU usage. Along with these, an issue was found in ngx_http_mp4_module, which can be exploited by an attacker to cause a DoS attack.

The issues in the HTTP/2 implementation occur if nginx is compiled with the ngx_http_v2_module and the http2 option of the listen directive is used in a configuration file. To exploit these two issues, attackers can send specially crafted HTTP/2 requests that lead to excessive CPU usage and memory usage, eventually triggering a DoS state. These issues affected nginx 1.9.5 - 1.15.5 and are now fixed in nginx 1.15.6 and 1.14.1.

In addition, a security issue was identified in ngx_http_mp4_module, which might allow an attacker to cause an infinite loop in a worker process, crash the worker process, or disclose its memory by using a specially crafted mp4 file. The issue only affects nginx if it is built with the ngx_http_mp4_module and the mp4 directive is used in the configuration file, and the attack is only possible if an attacker is able to trigger processing of a specially crafted mp4 file with the ngx_http_mp4_module. This issue affects nginx 1.1.3+ and 1.0.7+ and is now fixed in 1.15.6 and 1.14.1.

You can read more about these security issues in nginx at its official website.

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team
Introducing Howler.js, a Javascript audio library with full cross-browser support


Facebook’s GraphQL moved to a new GraphQL Foundation, backed by The Linux Foundation

Bhagyashree R
09 Nov 2018
3 min read
On Tuesday, The Linux Foundation announced that Facebook’s GraphQL project has been moved to a newly established GraphQL Foundation, hosted by the non-profit Linux Foundation. The foundation will be dedicated to enabling widespread adoption and accelerating the development of GraphQL and the surrounding ecosystem.

GraphQL was developed by Facebook in 2012 and open-sourced in 2015. It has been adopted by many companies in production, including Airbnb, Atlassian, Audi, CNBC, GitHub, Major League Soccer, Netflix, Shopify, The New York Times, Twitter, Pinterest, and Yelp.

Why has the GraphQL Foundation been created?

The foundation will provide a neutral home for the community to collaborate and will encourage more participation and contribution. The community will be able to spread the responsibilities and costs for infrastructure, which will help increase the overall investment. The neutral governance will also ensure equal treatment in the community. The co-creator of GraphQL, Lee Byron, said: “As one of GraphQL’s co-creators, I’ve been amazed and proud to see it grow in adoption since its open sourcing. Through the formation of the GraphQL Foundation, I hope to see GraphQL become industry standard by encouraging contributions from a broader group and creating a shared investment in vendor-neutral events, documentation, tools, and support.”

The foundation will also provide more resources for the GraphQL community, benefiting all contributors. It will help in organizing events and working groups, formalizing governance structures, providing marketing support to the project, and handling IP and other legal issues as they arise. The Executive Director of The Linux Foundation, Jim Zemlin, believes the new foundation will ensure long-term support for GraphQL: “We are thrilled to welcome the GraphQL Foundation into the Linux Foundation. This advancement is important because it allows for long-term support and accelerated growth of this essential and groundbreaking technology that is changing the approach to API design for cloud-connected applications in any language.”

In the next few months, The Linux Foundation, together with Facebook and the GraphQL community, will finalize the founding members of the GraphQL Foundation.

Read the full announcement on The Linux Foundation’s website and also check out the GraphQL Foundation’s website.

Introducing Apollo GraphQL Platform for product engineering teams of all sizes to do GraphQL right
7 reasons to choose GraphQL APIs over REST for building your APIs
Apollo 11 source code: A small step for a woman, and a huge leap for ‘software engineering’


Introducing Apollo GraphQL Platform for product engineering teams of all sizes to do GraphQL right

Bhagyashree R
08 Nov 2018
3 min read
Yesterday, Apollo introduced the Apollo GraphQL Platform for product engineering teams. It is built on Apollo's core open source GraphQL client and server and comes with additional open source devtools and cloud services. The platform is a combination of open source components, commercial extensions, and cloud services. The following diagram depicts its architecture:

Source: Apollo GraphQL

The Apollo GraphQL platform consists of the following components:

Core open source components

Apollo Server: A JavaScript GraphQL server used to define a schema and a set of resolvers that implement each part of that schema. It supports AWS Lambda and other serverless environments.
Apollo Client: A GraphQL client that manages data and state in an application. It comes with integrations for React, React Native, Vue, Angular, and other view layers.
iOS and Android clients: These clients allow querying a GraphQL API from native iOS and Android applications.
Apollo CLI: A command-line client that provides access to Apollo cloud services.

Cloud services

Schema registry: A central registry that acts as a single source of truth for a schema. It propagates all changes and details of your data, allowing multiple teams to collaborate with full visibility and security on a single data graph.
Client registry: A registry for tracking each known consumer of a schema, which can include both pre-registered and ad-hoc clients.
Operation registry: A registry of all known operations against the schema, which can similarly include both pre-registered and ad-hoc operations.
Trace warehouse: A data pipeline and storage layer that captures structured information about each GraphQL operation processed by an Apollo Server.

Apollo Gateway

The GraphQL gateway is the commercial plugin for Apollo Server. It allows multiple teams to collaborate on a single, organization-wide schema without mixing everyone’s code together in a monolithic single point of failure. To do that, the gateway deploys “micro-schemas” that reference each other into a single master schema, which then looks to a client just like any regular GraphQL schema.

Workflows

In addition to these components, Apollo also implements some useful workflows for managing a GraphQL API. Some of these workflows are:

Schema change validation: Checks the compatibility of a given schema against a set of previously observed operations, using the trace warehouse, operation registry, and (typically) the client registry.
Safelisting: Apollo provides an end-to-end mechanism for safelisting known clients and queries, a recommended best practice that limits production use of a GraphQL API to specific pre-arranged operations.

To read the full announcement, check out Apollo’s official announcement.

Apollo 11 source code: A small step for a woman, and a huge leap for ‘software engineering’
7 reasons to choose GraphQL APIs over REST for building your APIs
Baidu open sources ApolloScape and collaborates with Berkeley DeepDrive to further machine learning in automotives
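As a concrete example of the schema-and-resolvers model that Apollo Server uses, here is a minimal sketch following the apollo-server package's documented API; the Book type and its data are illustrative:

```js
const { ApolloServer, gql } = require('apollo-server');

// The schema defines what clients can query.
const typeDefs = gql`
  type Book {
    title: String
    author: String
  }
  type Query {
    books: [Book]
  }
`;

// Resolvers implement each part of the schema.
const resolvers = {
  Query: {
    books: () => [{ title: 'The Hobbit', author: 'J.R.R. Tolkien' }],
  },
};

const server = new ApolloServer({ typeDefs, resolvers });
server.listen().then(({ url }) => {
  console.log(`Server ready at ${url}`);
});
```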


Redbird, a modern reverse proxy for node

Amrata Joshi
06 Nov 2018
3 min read
Redbird 8.0, the latest version, was released last month. Redbird is a modern reverse proxy for Node. It comes with built-in cluster, HTTP2, LetsEncrypt, and Docker support, which helps in handling load balancing, dynamic virtual hosts, proxying web sockets, and SSL encryption. It is a complete library for building dynamic reverse proxies with the speed and robustness of http-proxy, and a lightweight package that includes everything needed for easy reverse routing of applications. It is useful for routing applications from different domains on one single host, and for easy handling of SSL.

What’s new in Redbird?

Support for HTTP2: HTTP2 can now be enabled easily by setting the http2 flag to true. Note: HTTP2 requires SSL/TLS certificates.
Support for LetsEncrypt: Redbird now supports automatic generation of SSL certificates using LetsEncrypt. When using LetsEncrypt, the obtained certificates are copied to a specified path on disk, so they should be backed up or saved.

Features

- Flexible and easy routing
- Websockets support
- Seamless SSL support, with automatic redirection from HTTP to HTTPS
- Automatic TLS certificate generation and renewal
- Load balancing following a round-robin algorithm
- Registering and unregistering routes programmatically without a restart, which allows zero-downtime deployments
- Automatic registration of running containers through Docker support
- Automatic multi-process operation through cluster support
- Built on top of the rock-solid node-http-proxy
- Optional logging based on bunyan
- Uses node-etcd to create proxy records automatically from an etcd cluster

Cluster support in Redbird

Redbird supports automatic generation of a Node cluster. To use the cluster support feature, specify the number of processes you want it to use. Redbird automatically restarts any thread that crashes, which increases reliability. If NTLM support is needed, Redbird adds the required header handler, which registers a response handler that makes sure the NTLM auth header is properly split into two entries from http-proxy.

Custom resolvers in Redbird

Redbird comes with custom resolvers that let you decide how the proxy server handles requests. Custom resolvers help in path-based routing, headers-based routing, and wildcard domain routing.

The install command for Redbird is npm install redbird. To read more about this news, check out the official page on GitHub.

Squid Proxy Server: debugging problems
How to Configure Squid Proxy Server
Squid Proxy Server: Fine Tuning to Achieve Better Performance
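Here is a minimal sketch of the routing described above, using Redbird's documented register() API; the domains and ports are made up for the example:

```js
// npm install redbird
const redbird = require('redbird');

// Start the reverse proxy listening on port 80.
const proxy = redbird({ port: 80 });

// Route each incoming domain to a local backend service. Routes can
// also be unregistered at runtime, which is what enables the
// zero-downtime deployments mentioned in the feature list.
proxy.register('example.com', 'http://localhost:8080');
proxy.register('api.example.com', 'http://localhost:3000');
```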

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

Bhagyashree R
02 Nov 2018
2 min read
Yesterday, the Google Chrome team introduced Carlo, a web rendering surface for Node applications. Carlo provides rich rendering capabilities, powered by the Google Chrome browser, to Node applications. Using Puppeteer, it is able to communicate with the locally installed browser instance. Puppeteer is also a Google Chrome project, which comes with a high-level API to control Chrome or Chromium over the DevTools Protocol.

Why was Carlo introduced?

Carlo aims to show how the locally installed browser can be used with Node out of the box. The advantage of using Carlo over Electron is that the Node v8 and Chrome v8 engines are decoupled in Carlo. This provides a maintainable model that allows independent updates of the underlying components. In short, Carlo gives you more control over bundling.

What can you do with Carlo?

Carlo enables you to create hybrid applications that use the web stack for rendering and Node for capabilities. With it, you can:

Visualize the dynamic state of your Node applications using the web rendering stack.
Expose additional system capabilities, accessible from Node, to your web applications.
Package your application into a single executable using the command-line interface, pkg.

How does it work?

It works in three steps:

First, Carlo checks whether Google Chrome is installed locally.
It then launches Google Chrome and establishes a connection to it over the process pipe.
Finally, it exposes a high-level API for rendering in Chrome.

For users who do not have Chrome installed, Carlo prints an error message. It supports the Chrome Stable channel, versions 70.* and later, and Node v7.6.0 onwards. You can install and get started with it by executing the following command: npm i carlo

Read the full description on Carlo’s GitHub repository.

Node v11.0.0 released
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
Node.js and JS Foundation announce intent to merge; developers have mixed feelings
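The hybrid Node-plus-web model is easiest to see in code. This sketch closely follows the usage example in Carlo's README; example.html stands in for whatever page your app renders:

```js
const carlo = require('carlo');

(async () => {
  // Launch the locally installed Chrome and open an app window.
  const app = await carlo.launch();

  // Terminate the Node process when the user closes the window.
  app.on('exit', () => process.exit());

  // Serve the web side of the app from the current folder.
  app.serveFolder(__dirname);

  // Expose a Node capability (here, environment variables) to the
  // page, where it becomes callable as window.env().
  await app.exposeFunction('env', () => process.env);

  // Navigate the rendering surface to the app's page.
  await app.load('example.html');
})();
```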


gRPC, a CNCF backed JavaScript library and an alternative to the REST paradigm, is now generally available

Bhagyashree R
25 Oct 2018
3 min read
Yesterday, the Cloud Native Computing Foundation (CNCF) announced the general availability of gRPC-Web, which means that it is stable enough for production use. gRPC-Web is a JavaScript client library that allows web apps to directly communicate with backend gRPC services, without the need for an intermediate HTTP server. This serves as an alternative to the REST paradigm of web development.

What is gRPC?

Source: gRPC

Initially developed at Google, gRPC is an open source remote procedure call (RPC) framework that can run in any environment. gRPC allows a client application to directly call methods on a server application on a different machine as if it were a local object. gRPC is based on the idea of defining a service by specifying the methods that can be called remotely, along with their parameter and return types. To handle client calls, the server implements this interface and runs a gRPC server. On the client side, the client has a stub that provides the same methods as the server. One of the advantages of using gRPC is that clients and servers can be written in any of the languages supported by gRPC. So, for instance, you can easily create a gRPC server in Java with clients in Go, Python, or Ruby.

How does gRPC-Web work?

With gRPC-Web, you can define a service “contract” between client web applications and backend gRPC servers using .proto definitions and auto-generate the client JavaScript. Here is how it works:

Define the gRPC service: The first step is to define the gRPC service. Like other gRPC services, gRPC-Web uses protocol buffers to define its RPC methods and their request and response message types.
Run the server and proxy: You need a gRPC server that implements the service interface and a gateway proxy that allows the client to connect to the server.
Write the JavaScript client: After the server and gateway are up and running, you can start making gRPC calls from the browser.

What are the advantages of using gRPC-Web?

Using gRPC-Web eliminates some tasks from the development process:

- Creating custom JSON serialization and deserialization logic
- Wrangling HTTP status codes
- Content type negotiation

It brings the following advantages:

End-to-end gRPC

gRPC-Web allows you to officially remove the REST component from your stack and replace it with pure gRPC. Replacing REST with gRPC helps in scenarios where a client request goes to an HTTP server, which in turn interacts with five backend gRPC services.

Tighter coordination between frontend and backend teams

As the entire RPC pipeline is defined using Protocol Buffers, you no longer need to have your “microservices teams” alongside your “client team.” The interaction between the client and the backend is just one more gRPC layer among others.

Generate client libraries easily

With gRPC-Web, the server that interacts with the “outside” world is now a gRPC server instead of an HTTP server, which means all of your service’s client libraries can be gRPC libraries. If you need client libraries for Ruby, Python, Java, and four other languages, you no longer have to write HTTP clients for all of them.

You can read CNCF’s official announcement on its website.

CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox
Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
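To give a feel for the browser-side step described above, here is a sketch modeled on the grpc-web hello-world example; the generated module names and the Greeter service come from that example rather than from this article:

```js
// Modules generated from the .proto definition by protoc with the
// grpc-web plugin (file names follow the hello-world example).
const { HelloRequest } = require('./helloworld_pb.js');
const { GreeterClient } = require('./helloworld_grpc_web_pb.js');

// The browser talks to a gateway proxy (such as Envoy) in front of
// the gRPC server, not to the gRPC server directly.
const client = new GreeterClient('http://localhost:8080');

const request = new HelloRequest();
request.setName('gRPC-Web');

client.sayHello(request, {}, (err, response) => {
  if (err) {
    console.error(`error ${err.code}: ${err.message}`);
  } else {
    console.log(response.getMessage());
  }
});
```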


Node v11.0.0 released

Prasad Ramesh
24 Oct 2018
2 min read
Node v11.0.0 is released. The focus of this release is primarily on improving internals and performance, and it updates to the stable V8 7.0.

Build and console changes in Node v11.0.0

Build: FreeBSD 10 support has been removed.
child_process: The default value of the windowsHide option is now true.
console: The console.countReset() function will emit a warning if the timer being reset does not exist, and console.time() will no longer reset a timer that already exists.

Dependency and http changes

Under dependencies, the Chrome V8 engine has been updated to v7.0.
fs: The fs.read() method now requires a callback. The previously deprecated fs.SyncWriteStream utility has been removed.
http: In Node v11.0.0, the http, https, and tls modules use the WHATWG URL parser by default.

General changes

process.binding() has been deprecated and can no longer be used; userland code using process.binding() should re-evaluate that use and begin migrating. There is also an experimental implementation of queueMicrotask() added.

Internal changes

The Windows performance-counter support has been removed, along with the --expose-http2 command-line option. In timers, interval timers will be rescheduled even if the previous interval threw an error, and the nextTick queue will be run after each immediate and timer.

Changes in utilities

The WHATWG TextEncoder and TextDecoder APIs are now global. The util.inspect() method's output size is limited to 128 MB by default. A runtime warning will be emitted when NODE_DEBUG is set for either http or http2.

Some other additions

- '-z relro -z now' linker flags
- an internal PriorityQueue class
- an InitializeV8Platform function
- a string-decoder fuzz test
- the new_large_object_space heap space
- a dns memory error test
- warnings when NODE_DEBUG is set to http/http2
- an Inspect suffix to BigInt64Array elements

For more details and a complete list of changes, visit the Node website.

Deno, an attempt to fix Node.js flaws, is rewritten in Rust
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
The top 5 reasons why Node.js could topple Java
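A small sketch of the experimental queueMicrotask() mentioned above; it schedules a callback on the microtask queue, like a resolved promise's handler, so it runs after synchronous code but before timers:

```js
queueMicrotask(() => console.log('microtask'));

setTimeout(() => console.log('timer'), 0);

Promise.resolve().then(() => console.log('promise'));

console.log('sync');
// Expected output order: sync, microtask, promise, timer
```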

Netlify raises $30 million for a new ‘Application Delivery Network’, aiming to replace servers and infrastructure management

Savia Lobo
11 Oct 2018
3 min read
On Tuesday, Netlify, a San Francisco based company, announced that it has raised $30 million in a series B round of funding for a new platform, named 'Application Delivery Network', designed specifically to assist web developers in building newer applications. The funding was led by Kleiner Perkins' Mamoon Hamid, with Andreessen Horowitz and the founders of Slack, Yelp, GitHub, and Figma participating.

Founded in 2015, Netlify provides an all-in-one workflow to build, deploy, and manage modern web projects. The new platform will enable all content and applications to be created directly on a global network, bypassing the need to ever set up or manage servers.

The vision behind the global 'Application Delivery Network'

Netlify has helped many organizations drop their web servers, with no infrastructure required; it also replaces the need for a CDN, and thus for a lot of servers. To implement the new architecture, Netlify provides developers with a git-centric workflow that supports APIs and microservices. Netlify's Application Delivery Network removes the last dependency on origin infrastructure, allowing companies to host entire applications globally using APIs and microservices.

Mathias Biilmann, Netlify founder and CEO, said that more devices bring additional complications. He adds, “Customers have come to us with AWS environments that have dozens or even hundreds of them for a single application. Our goal is to remove the requirement for those servers completely. We’re not trying to make managing infrastructure easy. We want to make it totally unnecessary.”

The investors’ take

Talking about the investment in Netlify, Mamoon Hamid, Managing Member and General Partner at the venture capital firm Kleiner Perkins, said, “In a sense, they are completely rethinking how the modern web works. But the response to what they are doing has been overwhelming. Most of the top projects in this developer space have already migrated their sites: React, Vue, Gatsby, Docker, and Kubernetes are all Netlify powered. The early traction really shows they hit a nerve with the developer community.”

As icing on the cake, Chris Coyier, CSS expert and co-founder of CodePen, says, “This is where the web is going. Netlify is just bringing it to us all a lot faster. With all the innovation in the space, this is an exciting time to be a developer.”

What users say about Netlify

In a discussion thread on Hacker News, users describe how Netlify helps web developers in their day-to-day web application tasks. Some of the features mentioned by users include:

- Forms, lambdas, and very easy testing by just pushing to another git branch
- The ability to publish with a simple git push, with Netlify doing all the rest of the work, including asset minification and bundling
- Connecting to GitHub and rebuilding the site automatically when a change is made to the master branch; users just have to connect their GitHub account in the UI

To know more about this news in detail, read Netlify’s official announcement.

How to build a real-time data pipeline for web developers – Part 1 [Tutorial]
How to build a real-time data pipeline for web developers – Part 2 [Tutorial]
Google wants web developers to embrace AMP. Great news for users, more work for developers


Node.js v10.12.0 (Current) released

Sugandha Lahoti
11 Oct 2018
4 min read
Node.js v10.12.0 was released yesterday, with notable changes to assert, cli, crypto, fs, and more. The Node.js API is still somewhat changing, and as it matures, certain parts are more reliable than others; hence, throughout the v10.12.0 documentation there are indications of each section's stability. Let's look at the notable changes that are stable.

Assert module

The assert module provides a simple set of assertion tests that can be used to test invariants. It comprises a strict mode and a legacy mode, although it is recommended to use only strict mode. In Node.js v10.12.0, the diff output is improved by sorting object properties when inspecting the values that are compared with each other.

Changes to cli

The command-line interface in Node.js v10.12.0 has two improvements:

- The options parser now normalizes _ to - in all multi-word command-line flags; e.g., --no_warnings has the same effect as --no-warnings.
- It also includes bash completion for the node binary. Users can generate a bash completion script by running node --completion-bash. The output can be saved to a file, which can be sourced to enable completion.

Crypto module

The crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions. In Node.js v10.12.0, crypto adds support for PEM-level encryption as well as an API for asymmetric key pair generation. The new methods crypto.generateKeyPair and crypto.generateKeyPairSync can be used to generate public and private key pairs. The API supports RSA, DSA, and EC, and a variety of key encodings (both PEM and DER).

Improvements to the file system

The fs module provides an API for interacting with the file system in a manner closely modeled around standard POSIX functions. Node.js v10.12.0 adds a recursive option to fs.mkdir and fs.mkdirSync. On setting this option to true, non-existing parent folders will be automatically created.

Updates to http2

The http2 module provides an implementation of the HTTP/2 protocol. The new Node.js version adds support for a 'ping' event on Http2Session that is emitted whenever a non-ack PING is received, as well as support for the ORIGIN frame. Also, nghttp2 is updated to v1.34.0, which adds RFC 8441 extended connect protocol support to allow the use of WebSockets over HTTP/2.

Changes in module

In the Node.js module system, each file is treated as a separate module. The module system has also been updated in v10.12.0: it adds module.createRequireFromPath(filename), a new method that can be used to create a custom require function that resolves modules relative to the filename path.

Improvements to process

The process object is a global that provides information about, and control over, the current Node.js process. Process adds a 'multipleResolves' event that is emitted whenever a Promise is attempted to be resolved multiple times.

Updates to url

Node.js v10.12.0 adds url.fileURLToPath(url) and url.pathToFileURL(path). These methods can be used to correctly convert between file: URLs and absolute paths.

Changes in utilities

The util module is primarily designed to support the needs of Node.js' own internal APIs. The changes in Node.js v10.12.0 include:

- A new sorted option is added to util.inspect(). If set to true, all properties of an object and all Set and Map entries will be sorted in the returned string. If set to a function, it is used as a compare function.
- The util.inspect.custom symbol is now defined in the global symbol registry as Symbol.for('nodejs.util.inspect.custom').
- Support for BigInt numbers in util.format() has also been added.

Improvements in the V8 API

The V8 module exposes APIs that are specific to the version of V8 built into the Node.js binary. A number of V8 C++ APIs have been marked as deprecated in v10.12.0, since they have been removed in the upstream repository; replacement APIs are added where necessary.

Changes in Windows

The Windows msi installer now provides an option to automatically install the tools required to build native modules.

You can find the list of full changes on the Node.js Blog.

Node.js and JS Foundation announce intent to merge; developers have mixed feelings.
Node.js announces security updates for all their active release lines for August 2018.
Deploying Node.js apps on Google App Engine is now easy.
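Two of the additions above are easy to demonstrate. This sketch exercises the new recursive option to fs.mkdir and the new crypto.generateKeyPair API; the path and key parameters are illustrative:

```js
const fs = require('fs');
const crypto = require('crypto');

// New in v10.12.0: recursive mkdir creates missing parent folders.
fs.mkdir('/tmp/a/b/c', { recursive: true }, (err) => {
  if (err) throw err;
  console.log('created /tmp/a/b/c');
});

// New in v10.12.0: asymmetric key pair generation. RSA with PEM
// encodings is shown here; DSA, EC, and DER encodings are also
// supported.
crypto.generateKeyPair(
  'rsa',
  {
    modulusLength: 2048,
    publicKeyEncoding: { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
  },
  (err, publicKey, privateKey) => {
    if (err) throw err;
    console.log(publicKey.split('\n')[0]); // -----BEGIN PUBLIC KEY-----
  }
);
```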