
Tech News - Server-Side Web Development

85 Articles

Smoke: Amazon’s new lightweight server-side service framework

Savia Lobo
05 Oct 2018
3 min read
Today, Amazon released the Smoke Framework, a lightweight server-side service framework written in the Swift programming language. By default, the Smoke Framework uses SwiftNIO for its networking layer. The framework can be used for REST-like or RPC-like services, works in conjunction with code generators from service models such as Swagger/OpenAPI, and has built-in support for JSON-encoded request and response payloads.

How the Swift-based Smoke Framework works

The Smoke Framework lets you specify handlers for the operations your service application needs to perform. When a request is received, the framework decodes it into the operation's input; when the handler returns, its response (if any) is encoded and sent back. Each invocation of a handler is also passed an application-specific context, allowing application-scoped entities such as other service clients to be passed to operation handlers. Using the context allows operation handlers to remain pure functions (whose return value is determined solely by the function's logic and input values) and hence easily testable.

Parts of the Smoke Framework

The Operation Delegate
The Operation Delegate handles specifics such as encoding and decoding requests to the handler's input and output. The Smoke Framework provides the JSONPayloadHTTP1OperationDelegate implementation, which expects a JSON-encoded request body as the handler's input and returns the output as a JSON-encoded response body.

The Operation Function
By default, the Smoke Framework provides four function signatures that this function can conform to:
- ((InputType, ContextType) throws -> ()): synchronous method with no output.
- ((InputType, ContextType) throws -> OutputType): synchronous method with output.
- ((InputType, ContextType, (Swift.Error?) -> ()) throws -> ()): asynchronous method with no output.
- ((InputType, ContextType, (SmokeResult<OutputType>) -> ()) throws -> ()): asynchronous method with output.

Error handling
By default, any error thrown from an operation handler fails the operation and the framework returns a 500 Internal Server Error to the caller (the framework also logs the event at the Error level). This behavior prevents any unintentional leakage of internal error information.

Testing
The Smoke Framework has been designed to make testing operation handlers straightforward. It is recommended that operation handlers be pure functions; in that case, the function can be called in unit tests with appropriately constructed input and context instances.

To know more, visit the Smoke Framework's official GitHub page.

ABI stability may finally come in Swift 5.0
Swift 4.2 releases with language, library and package manager updates!
What's new in Vapor 3, the popular Swift based web framework
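The handler-plus-context pattern described above is what makes Smoke handlers easy to test, and the idea carries over to any language. The sketch below is a conceptual analogue in Python, not Smoke's actual Swift API; the GreetingInput, AppContext, and handle_greeting names are made up for illustration.

```python
# Conceptual analogue (Python, not Smoke's Swift API): an operation handler
# is a pure function of (input, context), which makes it trivial to unit test.
from dataclasses import dataclass


@dataclass
class GreetingInput:
    name: str


@dataclass
class AppContext:
    greeting_prefix: str  # application-scoped dependency injected into every handler


def handle_greeting(op_input: GreetingInput, context: AppContext) -> str:
    # The output depends only on the input and the injected context.
    return f"{context.greeting_prefix}, {op_input.name}!"


# In a test, construct the input and context directly -- no HTTP layer needed.
assert handle_greeting(GreetingInput(name="world"),
                       AppContext(greeting_prefix="Hello")) == "Hello, world!"
```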


Django 2.1.2 fixes major security flaw that reveals password hash to “view only” admin users

Bhagyashree R
04 Oct 2018
2 min read
On Monday, Django 2.1.2 was released, addressing a security issue around password hash disclosure. Along with that, this version fixes several other bugs found in 2.1.1 and ships the latest string translations from Transifex.

User password hashes visible to “view only” admin users

In Django 2.1.1, admin users with permission to change the user model could see a portion of the password hash in the change form, and admin users with “view only” permission on the user model could see the entire hash. This is a serious problem if a password is weak or the site uses a weaker password hashing algorithm such as MD5 or SHA1. The vulnerability was assigned CVE-2018-16984 on 13th September 2018 and is fixed in this security release.

Bug fixes

- Fixed a bug where a lookup using F() on a non-existent model field didn't raise FieldError.
- The migrations loader now ignores files starting with a tilde or underscore.
- Migrations now correctly detect changes made to Meta.default_related_name.
- Added support for cx_Oracle 7.
- Fixed quoting of unique index names.
- Sliced queries with multiple columns of the same name no longer crash on Oracle 12.1.
- Fixed a crash when a user with the view-only (but not change) permission made a POST request to an admin user change form.

To read the full release notes, head over to the official Django website.

Django 2.1 released with new model view permission and more
Python web development: Django vs Flask in 2018
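Since the leaked fragment is only as dangerous as the hashing algorithm behind it, one practical mitigation is to keep strong hashers first in Django's PASSWORD_HASHERS setting so weak hashes are upgraded on the next login. The settings sketch below is a general Django hardening example, not something prescribed by the 2.1.2 advisory itself.

```python
# settings.py sketch (hypothetical project): strong hashers come first, so any
# existing MD5/SHA1 hashes are transparently re-hashed with PBKDF2 when a user
# next logs in. The legacy hashers are kept only so old hashes can still be verified.
PASSWORD_HASHERS = [
    'django.contrib.auth.hashers.PBKDF2PasswordHasher',
    'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
    'django.contrib.auth.hashers.Argon2PasswordHasher',
    # Legacy, verification-only:
    'django.contrib.auth.hashers.SHA1PasswordHasher',
    'django.contrib.auth.hashers.MD5PasswordHasher',
]
```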


GitHub addresses technical debt, now runs on Rails 5.2.1

Bhagyashree R
01 Oct 2018
3 min read
Last week, GitHub announced that their main application is now running on Rails 5.2.1. Along with this upgrade, they have also improved the overall codebase and cleaned up technical debt.

How GitHub upgraded to Rails 5.2.1

The upgrade started out as a hobby project with no dedicated team assigned; as it made progress and gained traction, it became a priority. Instead of using a long-running branch to upgrade Rails, they added the ability to dual boot the application in multiple versions of Rails. Two lock files were created:

- Gemfile.lock for the current version
- Gemfile_next.lock for the next version

Dual booting enabled the developers to regularly deploy changes for the next version to GitHub without affecting how production works. This was done by conditionally loading the code:

if GitHub.rails_3_2?
  ## 3.2 code (i.e. production a year and a half ago)
elsif GitHub.rails_4_2?
  # 4.2 code
else
  # all 5.0+ future code, ensuring we never accidentally
  # fall back into an old version going forward
end

To roll out the Rails upgrade they followed a careful and iterative process:

- The developers first deployed to their testing environment and asked volunteers from each team to click-test their area of the codebase to find any regressions the test suite missed.
- These regressions were then fixed, and deployment was done during off-hours to a percentage of production servers.
- During each deploy, data about exceptions and the performance of the site was collected. With this information they fixed the bugs that came up and repeated those steps until the error rate was low enough to be considered equal to the previous version.
- Finally, they merged the upgrade once they could deploy to full production for 30 minutes at peak traffic with no visible impact.

This process allowed them to deploy 4.2 and 5.2 with minimal customer impact and no downtime.

Key lessons learned during the upgrade

Upgrade regularly
Upgrading is easier the closer you are to a new version of Rails. Staying current also encourages your team to fix bugs in Rails itself instead of monkey-patching the application.

Keep an upgrade infrastructure
There will always be a new version to upgrade to. To keep up, add a build that runs against the Rails master branch to catch bugs in Rails and in your application early. This makes upgrades easier and increases your upstream contributions.

Regularly address technical debt
Technical debt refers to the additional rework you and your team have to do because an easy solution was chosen instead of a better approach that would have taken longer. Refusing to touch working code can become a bottleneck for upgrades. To avoid this, try not to couple your application logic too closely to your framework; the line where your application logic ends and your framework begins should be clear.

Assume things will break
Upgrading a large, heavily trafficked application like GitHub is not easy. They did face issues with CI, local development, slow queries, and other problems that didn't show up in their CI builds or click testing.

Read the full announcement on the GitHub Engineering blog.

GitHub introduces ‘Experiments’, a platform to share live demos of their research projects
GitLab raises $100 million, Alphabet backs it to surpass Microsoft’s GitHub
Packt’s GitHub portal hits 2,000 repositories


Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux

Bhagyashree R
26 Sep 2018
2 min read
Written in C90, Wasmjit is a small embeddable WebAssembly runtime. It is portable to most environments, but it primarily targets a Linux kernel module that can host Emscripten-generated WebAssembly modules.

What are the benefits of Wasmjit?

- Improved performance: Wasmjit lets you run WebAssembly modules in kernel space (ring 0). System calls become normal function calls, which eliminates the user-kernel transition overhead and avoids the scheduling overhead of swapping page tables. The result is a performance boost for syscall-bound programs such as web servers or FUSE file systems.
- No need to run an entire browser: Wasmjit also comes with a host environment for running in user space on POSIX systems, so WebAssembly modules can run without a browser at all.

What tools do you need to get started?

You need the following to get started with Wasmjit:

- A standard POSIX C development environment with cc and make
- The Emscripten SDK
- Optionally, kernel headers on Linux (the linux-headers-amd64 package on Debian, kernel-devel on Fedora)

What’s in the future?

Wasmjit currently supports x86_64 and can run a subset of Emscripten-generated WebAssembly on Linux, macOS, and within the Linux kernel as a kernel module. Coming releases are expected to bring more implementations and improvements along the following lines:

- Enough Emscripten host bindings to run nginx.wasm
- Introduction of an interpreter
- A Rust runtime for Rust-generated wasm files
- A Go runtime for Go-generated wasm files
- An optimized x86_64 JIT
- An arm64 JIT
- A macOS kernel module

What should you consider when using this runtime?

Wasmjit uses vmalloc(), a function for allocating a contiguous memory region in the virtual address space, for code and data section allocations. This prevents those pages from ever being swapped to disk, so indiscriminate access to the /dev/wasm device can make a system vulnerable to denial-of-service attacks. To mitigate this risk, a system-wide limit on the amount of memory used by the /dev/wasm device will be provided in the future.

To get started with Wasmjit, check out its GitHub repository.

Why is everyone going crazy over WebAssembly?
Unity Benchmark report approves WebAssembly load times and performance in popular web browsers
Golang 1.11 is here with modules and experimental WebAssembly port among other updates


HAProxy shares how you can use stick tables for server persistence, threat detection, and collecting metrics

Bhagyashree R
24 Sep 2018
3 min read
Yesterday, HAProxy published an article discussing stick tables, an in-memory storage mechanism. Introduced in 2010, stick tables let you track client activity across requests, enable server persistence, and collect real-time metrics. They are supported in both the HAProxy Community and Enterprise Editions.

You can think of a stick table as a type of key-value store. The key represents what you track across requests, such as a client IP, and the values are counters that, for the most part, HAProxy calculates for you.

What are the common use cases of stick tables?

StackExchange realized that beyond their core purpose, server persistence, stick tables could be applied to many other scenarios. They sponsored further development, and stick tables have since become an incredibly powerful subsystem within HAProxy. The main uses include:

Server persistence
Stick tables were originally introduced to solve the problem of server persistence. HTTP requests are stateless by design: each request is executed independently, without any knowledge of the requests that came before it. A stick table can store a piece of information, such as an IP address, cookie, or range of bytes in the request body, and associate it with a server. The next time HAProxy sees a connection using the same piece of information, it forwards the request to the same server. This helps track user activity across requests and provides a mechanism for storing events and categorizing them by client IP or other keys.

Bot detection
Stick tables can be used to defend against certain types of bot threats, including request floods, login brute-force attacks, vulnerability scanners, web scrapers, and slow loris attacks.

Collecting metrics
With stick tables, you can collect metrics to understand what is going on in HAProxy without enabling logging and having to parse the logs. In this scenario the Runtime API is used, which can read and analyze stick table data from the command line, a custom script, or an executable program. You can visualize this data using any dashboard of your choice, or use the fully loaded dashboard that comes with HAProxy Enterprise Edition.

These are a few of the use cases where stick tables can be applied. For a clearer understanding of stick tables and how they are used, check out the post by HAProxy.

Update: The article earlier said, "Yesterday (September 2018), HAProxy announced that they are introducing stick tables." As a reader pointed out, this was incorrect: stick tables have been around since 2010. The article has been updated accordingly.

Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]
How to create a standard Java HTTP Client in ElasticSearch
Why is everyone going crazy over WebAssembly?
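To make the key-value analogy concrete, the sketch below shows in plain Python, not HAProxy configuration, what a stick table effectively does for the bot-detection use case: it keys counters by client IP and derives a request rate over a sliding window. In HAProxy itself this is declared in the configuration file and the counters are maintained for you.

```python
# Conceptual sketch only (Python, not HAProxy config): a stick-table-like
# in-memory store mapping client IP -> recent request timestamps.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
REQUEST_LOG = defaultdict(deque)  # client_ip -> deque of recent request timestamps


def track_request(client_ip: str) -> int:
    """Record a request and return the client's request count within the window."""
    now = time.time()
    events = REQUEST_LOG[client_ip]
    events.append(now)
    # Drop events that have fallen out of the sliding window.
    while events and events[0] < now - WINDOW_SECONDS:
        events.popleft()
    return len(events)


def should_block(client_ip: str, limit: int = 20) -> bool:
    """Crude flood detection: block clients exceeding the per-window limit."""
    return track_request(client_ip) > limit
```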


Mojolicious 8.0, a web framework for Perl, released with new Promises and Roles

Savia Lobo
18 Sep 2018
2 min read
Mojolicious, a next-generation web framework for the Perl programming language, has been upgraded to version 8.0. Mojolicious 8.0 was announced at Mojoconf in Norway, held from 6th to 7th September 2018. The release is codenamed ‘Supervillain’ and is by far the biggest Mojolicious release to date.

Mojolicious allows users to easily grow single-file prototypes into well-structured MVC web applications. It is a powerful web development toolkit that can be used for all kinds of applications, independently of the web framework. Many companies, such as Alibaba Group, IBM, Logitech, and Mozilla, rely on Mojolicious to develop new code bases, and projects like Bugzilla are being ported to Mojolicious.

The Mojolicious community has decided to make a few organizational changes to support its continuous growth:

- All new development will be consolidated in a single GitHub organization.
- Mojolicious’ official IRC channel, which has almost 200 regulars, will be moving to Freenode (#mojo on irc.freenode.net, say hi!). This will make it easier for people not yet part of the Perl community to get involved.

Some highlights of Mojolicious 8.0

Promises/A+
Mojolicious 8.0 includes Promises/A+, a new module and pattern for working with event loops. A promise represents the eventual result of an asynchronous operation.

Roles and subprocesses
Version 8.0 introduces roles, a new way to extend Mojo classes. Subprocesses can now mix event loops and computationally expensive tasks.

Placeholder types and Mojo::File
Placeholder types help you avoid repetitive routes, while Mojo::File is a brand new module for dealing with file systems.

Cpanel::JSON::XS and Mojo::Pg
With Cpanel::JSON::XS, users can process JSON much faster. Mojo::Pg includes many new SQL::Abstract extensions for Postgres features.

To know more about Mojolicious 8.0 in detail, visit its GitHub page.

Warp: Rust’s new web framework for implementing WAI (Web Application Interface)
What’s new in Vapor 3, the popular Swift based web framework
Beating jQuery: Making a Web Framework Worth its Weight in Code

Cloudflare’s decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites

Melisha Dsouza
18 Sep 2018
4 min read
The Cloudflare team has introduced Cloudflare’s IPFS Gateway, which makes accessing content from the InterPlanetary File System (IPFS) quick and easy without having to install and run any special software on the user’s computer. The gateway, which supports new distributed web technologies, is hosted at cloudflare-ipfs.com. The team asserts that this will lead to highly reliable and security-enhanced web applications.

A brief gist of IPFS

When a user accesses a website from the browser, the browser tracks down the centralized repository for the website’s content, sends a request from the user’s computer to that origin server, and the server sends the content back. This centralization makes it impossible to keep content online once the origin server removes the data: if the origin server faces downtime or the site owner decides to take the data down, the content becomes unavailable.

IPFS, on the other hand, is a distributed file system that allows users to share files that are distributed to other computers throughout the networked file system. Content is stored across the nodes of the network, so data can be safely backed up.

Key differences between IPFS and the traditional web

#1 Free caching and serving of content
IPFS provides free caching and serving of content: anyone can sign up their computer to be a node in the system and start serving data. The traditional web, by contrast, relies on big hosting providers to store content and serve it to the rest of the web, and setting up a website with these providers costs money.

#2 Content-addressed data
IPFS uses content-addressed rather than location-addressed data. In the traditional web, when a user navigates to a website, the browser fetches data stored at the website’s IP address and the server sends back the relevant information from that IP. With IPFS, every block of data stored in the system is addressed by a cryptographic hash of its contents. When users request a piece of data from IPFS, they request it by its hash, i.e. content that has a hash value of, for example, QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy.

Why is Cloudflare’s IPFS Gateway important?

IPFS increases the resilience of the network. The content with a hash of QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy could be stored on dozens of nodes, so if one node storing the content goes down, the network simply looks for the content on another node.

In addition to resilience, there is an automatic level of security built into the system. If the data requested by the user is tampered with in transit, the hash of what the user receives will differ from the hash they asked for, so the system has a built-in way of knowing whether or not content has been tampered with.

Users can access any of the billions of files stored on IPFS from their browser. Using Cloudflare’s gateway, they can also build a website hosted entirely on IPFS and make it available at a custom domain name, and any website connected to the IPFS gateway is provided with a free SSL certificate.

IPFS embraces a new, decentralized vision of the web. Users will be able to create static websites, containing information that cannot be censored by governments, companies, or other organizations, that are served entirely over IPFS.

To know more about this announcement, head over to Cloudflare’s official blog.
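The difference between location addressing and content addressing is easy to see in a few lines of code. The sketch below is a simplified Python illustration of the principle only; real IPFS addresses are multihash-encoded CIDs (the Qm... strings above), not raw SHA-256 hex digests.

```python
# Simplified illustration of content addressing (not real IPFS CIDs):
# the address is derived from the content itself, not from where it is stored.
import hashlib


def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


original = b"Hello, distributed web!"
address = content_address(original)

# Any node can serve the bytes; the requester re-hashes them to verify integrity.
received = original  # imagine these bytes came back from an arbitrary node
assert content_address(received) == address  # tampered content would fail this check
```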
7 reasons to choose GraphQL APIs over REST for building your APIs
Laravel 5.7 released with support for email verification, improved console testing
Javalin 2.0 RC3 released with major updates!


Laravel 5.7 released with support for email verification, improved console testing

Prasad Ramesh
06 Sep 2018
3 min read
Laravel 5.7.0 has been released. The latest version of the PHP framework includes support for email verification, guest policies, dump-server, improved console testing, notification localization, and other changes.

The versioning scheme in Laravel follows the convention paradigm.major.minor. Major releases come every six months, in February and August, while minor releases may come out every week without breaking any functionality. For LTS releases like Laravel 5.5, bug fixes are provided for two years and security fixes for three years, giving them the longest support window. For general releases, bug fixes are provided for six months and security fixes for a year.

Laravel Nova
Laravel Nova is a pleasant-looking administration dashboard for Laravel applications. The primary feature of Nova is the ability to administer the underlying database records using Laravel Eloquent. Additionally, Nova supports filters, lenses, actions, queued actions, metrics, authorization, custom tools, custom cards, and custom fields.

After upgrading, when referencing the Laravel framework or its components from your application or package, always use a version constraint like 5.7.*, since major releases can have breaking changes.

Email verification
Laravel 5.7 introduces optional email verification for the authentication scaffolding included with the framework. To accommodate this feature, an email_verified_at timestamp column has been added to the default users table migration that ships with the framework.

Guest user policies
In previous Laravel versions, authorization gates and policies automatically returned false for unauthenticated visitors to your application. Now you can allow guests to pass through authorization checks by declaring an "optional" type-hint or supplying a null default value for the user argument definition:

Gate::define('update-post', function (?User $user, Post $post) {
    // ...
});

Symfony dump server
Laravel 5.7 offers integration with the dump-server command via a package by Marcel Pociot. To get started, first run the dump-server Artisan command:

php artisan dump-server

Once the server starts, all calls to dump will be shown in the dump-server console window instead of your browser. This allows inspection of values without mangling your HTTP response output.

Notification localization
You can now send notifications in a locale other than the current language, and Laravel will even remember this locale if the notification is queued. Localization of many notifiable entries can also be achieved via the Notification facade.

Console testing
Laravel 5.7 makes it easy to "mock" user input for console commands using the expectsQuestion method. Additionally, you can specify the expected exit code and output of a console command using the assertExitCode and expectsOutput methods.

These were some of the major changes in Laravel 5.7; for the complete list, visit the Laravel release notes.

Building a Web Service with Laravel 5
Google App Engine standard environment (beta) now includes PHP 7.2
Perform CRUD operations on MongoDB with PHP


Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js

Bhagyashree R
24 Aug 2018
2 min read
Express Gateway 1.11.0 has been released, adding an important feature to the proxy policy along with some bug fixes. Express Gateway is a simple, agnostic, organic, and portable microservices API gateway built on Express.js.

What is new in this version?

Additions
- New stripPath parameter: Support for a new parameter called stripPath has been added to the proxy policy for Express Gateway. Its default value is false. You can now completely own both the URL space of your backend server and the one exposed by Express Gateway.
- Official Helm chart: An official Helm chart has been added that lets you install Express Gateway on your Rancher or Kubernetes cluster with a single command.

Bug fixes
- The base condition schema is now correctly returned by the /schemas Admin API endpoint so that external clients can use it and resolve its references correctly.
- Previously, invalid configuration could be sent to the gateway through the Admin API when running Express Gateway in production: the gateway correctly validated the gateway.config content, but it wasn't validating all the policies inside it. With this fix, any Admin API call that modifies the configuration triggers validation, so a broken configuration file is never persisted to disk.
- Fixed a missing field in the oauth2-introspect JSON Schema.
- For consistency, the keyauth schema is now correctly named key-auth.

Miscellaneous changes
- The unused migration framework has been removed.
- The X-Powered-By header is now disabled for security reasons.
- The way Express Gateway is started in the official Docker file has changed: it is no longer wrapped in a bash command before being run, because the former approach allocates an additional /bin/sh process while the latter does not.

These were some of the updates introduced in Express Gateway 1.11.0. To know more about this new release, head over to the GitHub repo.

API Gateway and its need
Deploying Node.js apps on Google App Engine is now easy
How to build Dockers with microservices


Google App Engine standard environment (beta) now includes PHP 7.2

Savia Lobo
23 Aug 2018
2 min read
On Monday, Google Cloud announced the availability of its latest Second Generation runtime, PHP 7.2, on the App Engine standard environment. The runtime is available in beta, letting users build and deploy reliable applications with improved flexibility.

Like the other Second Generation runtimes on App Engine standard, such as Python 3.7 and Node.js 8, PHP 7.2 is open and idiomatic. This means you can run popular frameworks such as Symfony and Laravel, and even WordPress, on it. With PHP 7.2 on the App Engine standard environment, users can easily build and deploy an application that runs reliably under heavy load and with large amounts of data. Applications run within their own secure, reliable environment, independent of the hardware, operating system, or physical location of the server.

Benefits of the Google App Engine standard environment for PHP 7.2

- Faster auto-scaling: The App Engine standard environment can spin up instances in seconds, allowing an app to handle sudden bursts in demand. Deployment times for PHP apps are under a minute, and apps can be scaled down to zero instances when not needed, which makes the environment suitable for apps operating at any scale.
- No restrictions on running code: As a Second Generation runtime, PHP 7.2 can run any code without restrictions. Existing PHP apps and open source libraries run unmodified.
- Support for new languages: Because Second Generation runtimes like PHP 7.2 do not need custom-modified language runtimes to work with App Engine, support for new languages can be launched quickly.
- Support for Google Cloud client libraries: You can integrate Google Cloud services into your apps and run them on App Engine, Compute Engine, or any other platform.

To know more about this news in detail and to get started with PHP 7.2 for App Engine, visit the Google Cloud blog.

Common PHP Scenarios
Oracle releases GraphPipe: An open source tool that standardizes machine learning model deployment
Perform CRUD operations on MongoDB with PHP

Javalin 2.0.0 is now stable

Bhagyashree R
23 Aug 2018
2 min read
Earlier this month, the launch of Javalin 2.0 RC3 was announced. The team has now removed the "RC" tag and made Javalin 2.0.0 stable. Javalin is a web framework for Kotlin and Java that is simple, lightweight, interoperable, and flexible.

With ~5000 additions and ~5500 deletions in the git log, major changes have been introduced in this version. Most of the changes involve the removal of abstraction layers and completely rewritten WebSocket and test-suite implementations. To summarize, here are a few of the major changes:

Additions
- ETag support and a method for auto-generating ETags
- Support for WebJars, client-side web libraries packaged into JAR (Java Archive) files
- A pac4j implementation, a security library for Javalin web applications that supports authentication and authorization
- A RequestLogger interface ({ ctx, executionTime -> ... })
- An option to return 405 instead of 404, listing the available methods for the path
- A set of default responses, so you can throw BadRequestResponse()
- A CrudHandler to remove some boilerplate from creating standard CRUD APIs

Improvements
- Improved support for single page applications
- Improved exception handling for async requests
- You can now easily plug in your own mappers/rendering engines, as the JSON and template functionality has been modularized; the ctx.render() function now contains all the template functionality

Default value changes
- All requests now run through an AccessManager, with a default implementation that is a NOOP
- URL matching is now case-insensitive by default; call app.enableCaseSensitiveUrls() to turn case sensitivity back on
- Request caching is now limited to 4kb
- The server now has a LowResourceMonitor attached

To know more about the Javalin 2.0.0 updates, head over to the official website. If you are planning to migrate from 1.x to 2.x, refer to the migration guide.

Javalin 2.0 RC3 released with major updates!
Kotlin 1.3 M1 arrives with coroutines, new experimental features like unsigned integer types
Kotlin/Native 0.8 recently released with safer concurrent programming


Python 3.7 beta is available as the second generation Google App Engine standard runtime

Sugandha Lahoti
09 Aug 2018
2 min read
Google has announced the availability of Python 3.7 in beta on the App Engine standard environment. Developers can now easily run their web apps using up-to-date versions of popular languages, frameworks, and libraries, with Python being one of them.

The Second Generation runtimes remove previous App Engine restrictions, giving developers the ability to write portable web apps and microservices while taking full advantage of App Engine features such as auto-scaling, built-in security, and the pay-per-use billing model.

Python 3.7 was introduced as one of the new Second Generation runtimes at Cloud Next. The runtime brings developers up to date with the language community's progress and, as a Second Generation runtime, enables a faster path to continued runtime updates. It also supports arbitrary third-party libraries, including those that rely on C code and native extensions.

The new Python 3.7 runtime also supports the Google Cloud client libraries, so developers can integrate GCP services into their app and run it on App Engine, Compute Engine, or any other platform.

LumApps, a Paris-based provider of enterprise intranet software, has chosen App Engine to optimize for scale and developer productivity. Elie Mélois, CTO and co-founder of LumApps, says, "With the new Python 3.7 runtime on App Engine standard, we were able to deploy our apps very quickly, using libraries that we wanted such as scikit. App Engine helped us scale our platform from zero to over 2.5M users, from three developers to 40—all this with only one DevOps person!"

Check out the documentation to start using Python 3.7 today on the App Engine standard environment.

Deploying Node.js apps on Google App Engine is now easy
Hosting on Google App Engine
Should you move to Python 3? 7 Python experts’ opinions
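For a sense of what a Python 3.7 app on the standard environment looks like, here is a minimal hypothetical sketch using Flask (any WSGI framework works); the accompanying app.yaml only needs a runtime: python37 line.

```python
# main.py -- minimal sketch of a Python 3.7 App Engine standard app (hypothetical).
# app.yaml for this app would contain a single line: "runtime: python37".
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello():
    return 'Hello from the Python 3.7 runtime!'


if __name__ == '__main__':
    # Local development only; App Engine serves the "app" WSGI object directly.
    app.run(host='127.0.0.1', port=8080, debug=True)
```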


Javalin 2.0 RC3 released with major updates!

Bhagyashree R
06 Aug 2018
3 min read
Javalin is a web framework for Java and Kotlin that is simple, lightweight, interoperable, and flexible. With major changes introduced in the codebase, the team has announced the release of Javalin 2.0 RC3. The updates include the removal of some abstraction layers, using Set instead of List, the removal of CookieBuilder, Javalin lambda WebSockets replacing Jetty WebSockets, and more.

Updates in Javalin 2.0 RC3

Package structure improvements
The following packages have been restructured in this release (Javalin 1.7 → Javalin 2.0 RC3):

- io.javalin.embeddedserver.jetty.websocket → io.javalin.websocket
- io.javalin.embeddedserver.Location → io.javalin.staticfiles.Location
- io.javalin.translator.json.JavalinJsonPlugin → io.javalin.json.JavalinJson
- io.javalin.translator.json.JavalinJacksonPlugin → io.javalin.json.JavalinJackson
- io.javalin.translator.template.JavalinXyzPlugin → io.javalin.rendering.JavalinXyz
- io.javalin.security.Role.roles → io.javalin.security.SecurityUtil.roles
- io.javalin.ApiBuilder → io.javalin.apibuilder.ApiBuilder
- io.javalin.ApiBuilder.EndpointGroup → io.javalin.apibuilder.EndpointGroup

Changes to server defaults
Earlier, customizing the embedded server looked like this:

app.embeddedServer(new EmbeddedJettyFactory(() -> new Server())) // v1

With the embedded server abstraction removed, you can now write:

app.server(() -> new Server()) // v2

Since the static method Javalin.start(port) has been removed, use Javalin.create().start(0) instead. The defaultCharset() method has also been removed.

The following are now enabled by default:
- Dynamic gzip; turn it off with disableDynamicGzip()
- Request caching, now limited to 4kb
- A LowResourceMonitor attached to the server
- Case-insensitive URLs, meaning Javalin treats /path and /Path as the same URL; this can be disabled with app.enableCaseSensitiveUrls()

Javalin lambda WebSockets replace Jetty WebSockets
Since Jetty WebSockets have limited functionality, they have been replaced with Javalin lambda WebSockets.

AccessManager
AccessManager is an interface used to set per-endpoint authentication and authorization. It uses Set instead of List and now runs for every single request, though the default implementation does nothing.

Context
Context is the object that provides everything needed to handle an HTTP request. The following updates have been introduced:

- ctx.uri() has been removed; it was a duplicate of ctx.path()
- ctx.param() is replaced with ctx.pathParam()
- ctx.xyzOrDefault("key") methods are changed to ctx.xyz("key", "default")
- ctx.next() has been removed
- ctx.request() is now ctx.req
- ctx.response() is now ctx.res
- All ctx.renderXyz methods are now just ctx.render(), since the correct engine is chosen based on the file extension
- ctx.charset(charset) has been removed
- CookieBuilder has been removed; use the Cookie class instead
- List<T> is now returned instead of Array<T>
- Methods that used to return nullable collections now return empty collections instead
- Kotlin users can now do ctx.body<MyClass>() to deserialize JSON

These are some of the major updates in Javalin 2.0 RC3. To know more, head over to the GitHub repository.

Kotlin 1.3 M1 arrives with coroutines, and new experimental features like unsigned integer types
Top frameworks for building your Progressive Web Apps (PWA)
Kotlin/Native 0.8 recently released with safer concurrent programming

Warp: Rust's new web framework

Melisha Dsouza
06 Aug 2018
3 min read
Warp is a new Rust web framework built by Sean McArthur and Carl Lerche. It's a tool for building and managing web servers; more specifically, it was designed to give developers more control over how they configure routes within their services.

It's worth pointing out that Rust's Warp shouldn't be confused with Haskell's Warp: in the Haskell world, Warp is a lightweight web server for WAI applications. This article was amended 7.25.2019 to clarify that Rust's Warp framework and Haskell's Warp framework are different.

What's the thinking behind Rust's Warp framework?

In a blog post announcing the framework, McArthur explains that the inspiration for Warp came out of his experience working with many different frameworks and tools, most recently Node.js. He writes:

"I found that I often times need to configure predicates, like certain headers required, query parameters needed, etc, and sometimes, I need to configure that a set of routes should be 'mounted' at a different path, and possibly want certain predicates there too. I noticed the concept of mounting or sub-routes or sub-resources or whatever the framework calls them didn’t feel… natural, at least to me."

With this challenge setting the context for Warp, McArthur's love of Rust and the highly functional aspects of Scala tools like Finch and Akka helped lay the technical foundations for the framework. Central to Warp are filters.

Read next: Will Rust replace C++?

What are filters in the Warp web framework?

Filters are a feature that makes configuring endpoints easier. McArthur describes a filter as "a function that can operate on some input... and returns some output, which could be some app-specific type you wish to pass around, or can be some reply to send back as an HTTP response." The advantage is that if you are trying to "piece together data from several different places of a request before you have your domain object", you can treat each source as a filter and combine them in a relatively straightforward manner. McArthur repeatedly uses the word 'natural'; to put it another way, filters make things easier and cleaner for the developer.

Read next: Rust 1.28 is here with global allocators, nonZero types and more

The Rust ecosystem is growing

It's not news that Rust is a hugely popular programming language; in this year's Stack Overflow survey, Rust was listed as the most loved language by respondents (three years running). However, it hasn't yet seen extensive and rapid growth despite its advantages. With a growing ecosystem of tools like Warp, that could well change over the next couple of years.


Django 2.1 released with new model view permission and more

Sugandha Lahoti
06 Aug 2018
3 min read
Django 2.1 has been released with changes to model view permissions, the database backend API, and additional new features. Django 2.1 supports Python 3.5, 3.6, and 3.7.

Django 2.1 is a time-based release. The schedule followed was:

- May 14, 2018: Django 2.1 alpha; feature freeze.
- June 18: Django 2.1 beta; non-release-blocking bug fix freeze.
- July 16: Django 2.1 RC 1; translation string freeze.
- ~August 1: Django 2.1 final.

Here is the list of the new features:

Model view permission
Django 2.1 adds a view permission to the model Meta.default_permissions. This new permission allows users read-only access to models in the admin and is created automatically when running migrate.

Considerations for the new model view permission: with the new "view" permission, existing custom admin forms may raise errors when a user doesn't have the change permission, because the form might access nonexistent fields. If users have a custom permission with a codename of the form can_view_<modelname>, the new view permission handling in the admin will allow view access to the changelist and detail pages for those models.

Changes to the database backend API
- To adhere to PEP 249, exceptions raised when a database doesn't support a feature are changed from NotImplementedError to django.db.NotSupportedError.
- The allow_sliced_subqueries database feature flag is renamed to allow_sliced_subqueries_with_in.
- DatabaseOperations.distinct_sql() now requires an additional params argument and returns a tuple of SQL and parameters instead of a SQL string.
- DatabaseFeatures.introspected_boolean_field_type is changed from a method to a property.

Dropped support for MySQL 5.5 and PostgreSQL 9.3
Django 2.1 marks the end of upstream support for MySQL 5.5; it now supports MySQL 5.6 and higher. Similarly, it ends support for PostgreSQL 9.3 and supports PostgreSQL 9.4 and higher.

SameSite cookies
The cookies used for django.contrib.sessions, django.contrib.messages, and Django's CSRF protection now set the SameSite flag to Lax by default. Browsers that respect this flag won't send these cookies on cross-origin requests.

Other features
- BCryptPasswordHasher is removed from the default PASSWORD_HASHERS setting.
- The minimum supported version of mysqlclient is increased from 1.3.3 to 1.3.7.
- Support for SQLite < 3.7.15 is removed.
- The multiple attribute rendered by the SelectMultiple widget now uses HTML5 boolean syntax rather than XHTML's multiple="multiple".
- The local-memory cache backend now uses a least-recently-used (LRU) culling strategy rather than a pseudo-random one.
- The new json_script filter safely outputs a Python object as JSON, wrapped in a <script> tag, ready for use with JavaScript.

These are just a select few of the updates available in Django 2.1; the release notes cover all the new features in detail.

Getting started with Django RESTful Web Services
Getting started with Django and Django REST frameworks to build a RESTful app
Python web development: Django vs Flask in 2018
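As an illustration of how the new view permission slots in alongside the existing add/change/delete permissions, here is a hedged sketch: the blog app and Post model are hypothetical, but the view_<modelname> codename convention is the one Django generates during migrate.

```python
# Sketch (hypothetical app/model names): grant a user read-only admin access
# to blog.Post using the view permission that "manage.py migrate" creates.
from django.contrib.auth.models import Permission, User


def grant_read_only_admin(user: User) -> None:
    view_post = Permission.objects.get(codename='view_post',
                                       content_type__app_label='blog')
    user.user_permissions.add(view_post)
    user.is_staff = True  # required to log into the admin at all
    user.save()

# Elsewhere, the check mirrors the other default permissions:
# user.has_perm('blog.view_post'), user.has_perm('blog.change_post'), ...
```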