
Tech News - Cloud Computing

175 Articles

Microsoft announces ‘Decentralized Identity’ in partnership with DIF and W3C Credentials Community Group

Bhagyashree R
12 Oct 2018
3 min read
Yesterday, Microsoft published a white paper on its Decentralized Identity (DID) solution. These identities are user-generated, self-owned, globally unique identifiers rooted in decentralized systems. Over the past 18 months, Microsoft has been working towards building a digital identity system using blockchain and other distributed ledger technologies. With these identities, Microsoft aims to enhance personal privacy, security, and control.

Microsoft has been actively collaborating with members of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. They are working with these groups to identify and develop critical standards. Together they plan to establish a unified, interoperable ecosystem that developers and businesses can rely on to build more user-centric products, applications, and services.

Why is decentralized identity (DID) needed?

Nowadays, people use digital identity at work, at home, and across every app, service, and device. Access to these digital identities, such as email addresses and social network IDs, can be removed at any time by the email provider, social network provider, or other external parties. Users also grant permissions to numerous apps and devices, which calls for a high degree of vigilance in tracking who has access to what information.

This standards-based decentralized identity system empowers users and organizations to have greater control over their data. It addresses the problem of users granting broad consent to countless apps and services by providing a secure, encrypted digital hub where they can store their identity data and easily control access to it.

What it means for users, developers, and organizations

Benefits for users:
- Enables all users to own and control their identity
- Provides secure experiences that incorporate privacy by design
- Supports user-centric apps and services

Benefits for developers:
- Allows developers to provide users personalized experiences while respecting their privacy
- Enables developers to participate in a new kind of marketplace, where creators and consumers exchange directly

Benefits for organizations:
- Organizations can deeply engage with users while minimizing privacy and security risks
- Provides a unified data protocol for organizations to transact with customers, partners, and suppliers
- Improves transparency and auditability of business operations

To know more about decentralized identity, read the white paper published by Microsoft.

Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure's Intelligent Cloud
Microsoft announces Project xCloud, a new Xbox game streaming service, on the heels of Google's Stream news last week


workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice

Melisha Dsouza
20 Feb 2019
2 min read
Cloudflare users will very soon be able to deploy Workers without having a Cloudflare domain. They will be able to deploy their Cloudflare Workers to a subdomain of their choice, with an extension of .workers.dev. According to the Cloudflare blog, this is a step towards making it easy for users to get started with Workers and build a new serverless project from scratch.

Cloudflare Workers' serverless execution environment allows users to create new applications or improve existing ones without configuring or maintaining infrastructure. Cloudflare Workers run on Cloudflare servers, and not in a user's browser, meaning that a user's code will run in a trusted environment where it cannot be bypassed by malicious clients.

The workers.dev domain was obtained through Google's TLD launch program. Customers can head over to workers.dev, where they will be able to claim a subdomain (one per user). workers.dev is itself fully served using Cloudflare Workers.

Zack Bloom, the Director of Product for Product Strategy at Cloudflare, says that workers.dev will be especially useful for serverless apps. Without cold starts, users will get instant scaling to almost any volume of traffic, making this type of serverless seem faster and cheaper.

Cloudflare Workers have received an amazing response from users all over the internet (source: Hacker News), and this news has also been received with much enthusiasm:
https://twitter.com/MrAhmadAwais/status/1097919710249783297

You can head over to the Cloudflare blog for more information on this news.

Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers


AWS announces more flexibility in its Certification Exams, drops exam prerequisites

Melisha Dsouza
18 Oct 2018
2 min read
Last week (on 11th October), the AWS team announced that they are removing exam prerequisites to give users more flexibility in the AWS Certification Program. Previously, it was a prerequisite for a customer to pass the Foundational or Associate level exam before appearing for the Professional or Specialty certification. AWS has now eliminated this prerequisite, taking into account customers' requests for flexibility.

Customers are no longer required to have an Associate certification before pursuing a Professional certification. Nor do they need to hold a Foundational or Associate certification before pursuing a Specialty certification.

The Professional level exams are pretty tough to pass. Unless a customer has deep knowledge of the AWS platform, passing the Professional exam is difficult. If a customer skips the Foundational or Associate level exams and directly appears for the Professional level exams, he or she may not have the practice and knowledge necessary to fare well in them. And failing the exam, then backing up to the Associate level, can be demotivating.

The AWS Certification helps individuals obtain expertise in designing, deploying, and operating highly available, cost-effective, and secure applications on AWS. They will gain a proficiency with AWS that earns them tangible benefits. The exams also help employers identify skilled professionals who can use AWS technologies to lead IT initiatives. Moreover, they help employers reduce the risks and costs of implementing workloads and projects on the AWS platform.

AWS dominates the cloud computing market, and the AWS Certified Solutions Architect exams can help candidates secure their career in this exciting field. AWS offers digital and classroom training to build cloud skills and prepare for certification exams.

To know more about this announcement, head over to their official blog.

'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
AWS machine learning: Learning AWS CLI to execute a simple Amazon ML workflow [Tutorial]


Developers ask for an option to disable Docker Compose from automatically reading the .env file

Bhagyashree R
18 Oct 2019
3 min read
In June this year, Jonathan Chan, a software developer, reported that Docker Compose automatically reads from .env. Since other systems also access the same file for parsing and processing variables, this was creating some confusion, resulting in broken compatibility with other .env utilities.

Docker Compose has a "docker-compose.yml" config file used for deploying, combining, and configuring multiple multi-container Docker applications. The .env file is used for putting values in the "docker-compose.yml" file. In the .env file, the default environment variables are specified in the form of key-value pairs.

"With the release of 1.24.0, the feature where Compose will no longer accept whitespace in variable names sourced from environment files (this matches the Docker CLI behavior) breaks compatibility with other .env utilities. Although my setup does not use the variables in .env for docker-compose, docker-compose now fails because the .env file does not meet docker-compose's format," Chan explains.

This is not the first time that this issue has been reported. Earlier this year, a user opened an issue on the GitHub repo describing that after upgrading Compose to 1.24.0-rc1, its automatic parsing of the .env file was failing. "I keep export statements in my .env file so I can easily source it in addition to using it as a standard .env. In previous versions of Compose, this worked fine and didn't give me any issues, however with this new update I instead get an error about spaces inside a value," he explained in his report.

As a solution, Chan has proposed, "I propose that you can specify an option to ignore the .env file or specify a different .env file (such as .docker.env) in the docker-compose.yml file so that we can work around projects that are already using the .env file for something else."

This sparked a discussion on Hacker News where users also suggested a few workarounds. "This is the exact class of problem that docker itself attempts to avoid. This is why I run docker-compose inside a docker container, so I can control exactly what it has access to and isolate it. There's a guide to do so here. It has the added benefit of not making users install docker-compose itself - the only project requirement remains docker," a user commented. Another user recommended, "You can run docker-compose.yml in any folder in the tree but it only reads the .env from cwd. Just cd into someplace and run docker-compose."

Some users also pointed out the lack of an authentication mechanism in Docker Hub. "Docker Hub still does not have any form of 2FA. Even SMS 2FA would be something / great at this point. As an attacker, I would put a great deal of focus on attacking a company's registries on Docker Hub. They can't have 2FA, so the work/reward ratio is quite high," a user commented. Others recommended setting up a time-based one-time password (TOTP) instead.

Check out the reported issue on the GitHub repository.

Amazon EKS Windows Container Support is now generally available
GKE Sandbox: A gVisor based feature to increase security and isolation in containers
6 signs you need containers
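To see why stricter key-value parsing breaks .env files that were written to be source-able by a shell, consider this small illustrative sketch of the two parsing styles. This is not Compose's actual implementation - the rules are simplified stand-ins that reproduce the reported failure mode:

```python
import re

# A .env file written to be shell-compatible: `source .env` works in a
# shell, but strict key=value parsers may reject the "export " prefix.
ENV_FILE = """\
export DB_HOST=localhost
DB_PORT=5432
"""

def parse_lenient(text):
    # Lenient style: tolerate an `export` prefix and stray whitespace.
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("export "):
            line = line[len("export "):]
        key, _, value = line.partition("=")
        env[key.strip()] = value
    return env

def parse_strict(text):
    # Strict style (roughly what newer parsers enforce): the variable
    # name must be a bare identifier, with no surrounding whitespace.
    env = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        m = re.match(r"^([A-Za-z_][A-Za-z0-9_]*)=(.*)$", line)
        if m is None:
            raise ValueError(f"invalid .env line: {line!r}")
        env[m.group(1)] = m.group(2)
    return env

print(parse_lenient(ENV_FILE))  # {'DB_HOST': 'localhost', 'DB_PORT': '5432'}
parse_strict(ENV_FILE)          # raises on the `export DB_HOST=...` line
```

The `export DB_HOST=...` line sails through the lenient parser but trips the strict one, which is exactly the kind of break the GitHub issue describes.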


Docker Store and Docker Cloud are now part of Docker Hub

Amrata Joshi
14 Dec 2018
3 min read
Yesterday, the team at Docker announced that Docker Store and Docker Cloud are now part of Docker Hub. This makes the process of finding, storing, and sharing container images easy. The new Docker Hub has an updated user experience where Docker Certified and Verified Publisher images are available for discovery and download.

Docker Cloud, a service provided by Docker, helps users connect to their existing cloud providers like Azure or AWS. Docker Store is a self-service portal for Docker's ecosystem partners for publishing and distributing their software through Docker images.

https://twitter.com/Docker/status/1073369942660067328

What's new in this Docker Hub update?

Repositories
- Users can now view recently pushed tags and automated builds on their repository page.
- Pagination has been added to the repository tags.
- The repository filtering on the Docker Hub homepage has been improved.

Organizations and Teams
- Organization owners can now view team permissions across all of their repositories at a glance.
- Existing Docker Hub users can now be added to a team via their email IDs instead of their Docker IDs.

Automated Builds
- Build caching is now used to speed up builds.
- It is now possible to add environment variables and run tests in the builds.
- Automated builds can now be added to existing repositories.
- Account credentials for services like GitHub and Bitbucket need to be re-linked to the organization to leverage the new automated builds.

Improved container image search
- Filtering by Official, Verified Publisher, and Certified images guarantees a level of quality in the Docker images.
- Docker Hub provides filtering by categories for quick search of images.
- There is no need to update any bookmarks on Docker Hub.

Verified Publisher and Certified images
Docker Certified and Verified Publisher images are now available for discovery and download on Docker Hub. Just like Official Images, publisher images have been vetted by Docker. The Certified and Verified Publisher images are provided by third-party software vendors. Certified images are tested and supported by verified publishers on the Docker Enterprise platform, adhere to Docker's container best practices, pass a functional API test suite, and display a unique quality mark, "Docker Certified".

Read more about this release on Docker's blog post.

Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format
Docker announces Docker Desktop Enterprise
Creating a Continuous Integration commit pipeline using Docker [Tutorial]


Are Debian and Docker slowly losing popularity?

Savia Lobo
12 Mar 2019
5 min read
Michael Stapelberg, in his blog, stated why he has planned to reduce his involvement in the Debian software distribution. Stapelberg is the one who wrote the Linux tiling window manager i3, the code search engine Debian Code Search, and the netsplit-free IRC network RobustIRC. He said he will reduce his involvement in Debian by:

- transitioning packages to be team-maintained
- removing the Uploaders field on packages with other maintainers
- orphaning packages where he is the sole maintainer

Stapelberg mentions the pain points in Debian and why he decided to move away from it.

Change process in Debian
Debian follows a change process where packages are nudged in the right direction by a document called the Debian Policy, or its programmatic embodiment, lintian. "Currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages," Stapelberg writes. "Granting so much personal freedom to individual maintainers prevents us as a project from raising the abstraction level for building Debian packages, which in turn makes tooling harder."

Fragmented workflow and infrastructure
Debian generally seems to prefer decentralized approaches over centralized ones. For example, individual packages are maintained in separate repositories (as opposed to in one repository), each repository can use any SCM (git and svn are common ones) or no SCM at all, and each repository can be hosted on a different site. Practically, non-standard hosting options are used rarely enough to not justify their cost, but frequently enough to be a huge pain when trying to automate changes to packages. Stapelberg said that after he noticed the workflow fragmentation in the Go packaging team, he tried fixing it with a workflow changes proposal, but did not succeed in implementing it.

Debian is hard to machine-read
"While it is obviously possible to deal with Debian packages programmatically, the experience is far from pleasant. Everything seems slow and cumbersome." debiman needs help from piuparts in analyzing the alternatives mechanism of each package to display the manpages of e.g. psql(1). This is because maintainer scripts modify the alternatives database by calling shell scripts; without actually installing a package, you cannot know which changes it makes to the alternatives database. There used to be a fedmsg instance for Debian, but it no longer seems to exist. "It is unclear where to get notifications from for new packages, and where best to fetch those packages," Stapelberg says.

A user on Hacker News said, "I've been willing to package a few of my open-source projects as well for almost a year, and out of frustration, I've ended up building my .deb packages manually and hosting them on my own apt repository. In the meantime, I've published a few packages on PPAs (for Ubuntu) and on AUR (for ArchLinux), and it's been as easy as it could have been."

Check out Stapelberg's entire blog post for more.

Maish Saidel-Keesing believes Docker will die soon
Maish Saidel-Keesing, a Cloud & AWS Solutions Architect at CyberArk, Israel, mentions in his blog post that "the days for Docker as a company are numbered and maybe also a technology as well."
https://twitter.com/maishsk/status/1019115484673970176

Docker has undoubtedly brought in the popular containerization technology. However, Saidel-Keesing says, "Over the past 12-24 months, people are coming to the realization that docker has run its course and as a technology is not going to be able to provide additional value to what they have today - and have decided to start to look elsewhere for that extra edge." He also talks about how the Open Container Initiative brought with it the Runtime Spec, which opened the door to using something else besides Docker as the runtime. Docker is no longer the only runtime being used. "Kelsey Hightower has updated his Kubernetes the Hard Way over the years from CRI-O to containerd to gvisor. All the cool kids on the block are no longer using docker as the underlying runtime. There are many other options out there today - clearcontainers, katacontainers - and the list is continuously growing," Saidel-Keesing says.

"What triggered me was a post from Scott McCarty about the upcoming RHEL 8 beta - Enterprise Linux 8 Beta: A new set of container tools."
https://twitter.com/maishsk/status/1098295411117309952

Saidel-Keesing writes, "Lo and behold - no more docker package available in RHEL 8." He further added, "If you're a container veteran, you may have developed a habit of tailoring your systems by installing the 'docker' package. On your brand new RHEL 8 Beta system, the first thing you'll likely do is go to your old friend yum. You'll try to install the docker package, but to no avail. If you are crafty, next, you'll search and find this package: podman-docker.noarch: 'package to Emulate Docker CLI using podman.'"

To know more on this news, head over to Maish Saidel-Keesing's blog post.

Docker Store and Docker Cloud are now part of Docker Hub
Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format
It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!

AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer

Natasha Mathur
30 Jul 2018
2 min read
AWS announced support for two new actions, namely redirect and fixed-response, for Elastic Load Balancing in Application Load Balancer last week.

Elastic Load Balancing offers automatic distribution of incoming application traffic. The traffic is distributed across targets, such as Amazon EC2 instances, IP addresses, and containers. One of the types of load balancers that Elastic Load Balancing offers is the Application Load Balancer. The Application Load Balancer simplifies and improves the security of your application as it uses only the latest SSL/TLS ciphers and protocols. It is best suited for load balancing of HTTP and HTTPS traffic and operates at the request level, which is layer 7.

Redirect and fixed-response support simplifies the deployment process while leveraging the scale, availability, and reliability of Elastic Load Balancing. Let's discuss how these latest features work.

The new redirect action enables the load balancer to redirect incoming requests from one URL to another. This includes redirecting HTTP requests to HTTPS, allowing more secure browsing, better search ranking, and a higher SSL/TLS score for your site. Redirects can also send users from an old version of an application to a new one.

The fixed-response action helps control which client requests are served by your applications. It lets you respond to incoming requests with HTTP error response codes as well as custom error messages from the load balancer, with no need to forward the request to the application.

Using both redirect and fixed-response actions in your Application Load Balancer considerably improves the customer experience and the security of your user requests.

Redirect and fixed-response actions are now available for your Application Load Balancer in all AWS regions. For more details, check out the Elastic Load Balancing documentation page.

Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
Build an IoT application with AWS IoT [Tutorial]
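As a concrete illustration, here is a sketch of attaching both actions to listener rules with boto3 (the ELBv2 API). The listener ARN, priorities, and paths are placeholders; consult the Elastic Load Balancing documentation for the authoritative parameters:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

LISTENER_ARN = "arn:aws:elasticloadbalancing:REGION:ACCOUNT:listener/app/demo/..."  # placeholder

# Rule 1: redirect HTTP requests to HTTPS with a 301,
# preserving the original host, path, and query string.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/*"]}],
    Actions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "Host": "#{host}",
            "Path": "/#{path}",
            "Query": "#{query}",
            "StatusCode": "HTTP_301",
        },
    }],
)

# Rule 2: answer /maintenance directly from the load balancer,
# without forwarding the request to any target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/maintenance"]}],
    Actions=[{
        "Type": "fixed-response",
        "FixedResponseConfig": {
            "StatusCode": "503",
            "ContentType": "text/plain",
            "MessageBody": "Service temporarily unavailable",
        },
    }],
)
```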


Amazon S3 is retiring support for path-style API requests; sparks censorship fears

Fatema Patrawala
06 May 2019
5 min read
Last week on Tuesday, Amazon announced that Amazon S3 will no longer support path-style API requests. Currently, Amazon S3 supports two request URI styles in all regions: path-style (also known as V1), which includes the bucket name in the path of the URI (example: //s3.amazonaws.com/<bucketname>/key), and virtual-hosted style (also known as V2), which uses the bucket name as part of the domain name (example: //<bucketname>.s3.amazonaws.com/key).

The Amazon team mentions in the announcement, "In our effort to continuously improve customer experience, the path-style naming convention is being retired in favor of virtual-hosted style request format." They have also asked customers to update their applications to use the virtual-hosted style request format when making S3 API requests, and to do so before September 30th, 2020 to avoid any service disruptions. Customers using the AWS SDK can upgrade to the most recent version of the SDK to ensure their applications are using the virtual-hosted style request format.

They further mentioned, "Virtual-hosted style requests are supported for all S3 endpoints in all AWS regions. S3 will stop accepting requests made using the path-style request format in all regions starting September 30th, 2020. Any requests using the path-style request format made after this time will fail."

Users on Hacker News see this as a poor development by Amazon and have noted its implications: collateral freedom techniques using Amazon S3 will no longer work. One of them commented strongly, "One important implication is that collateral freedom techniques [1] using Amazon S3 will no longer work. To put it simply, right now I could put some stuff not liked by Russian or Chinese government (maybe entire website) and give a direct s3 link to https://s3.amazonaws.com/mywebsite/index.html. Because it's https - there is no way man in the middle knows what people read on s3.amazonaws.com. With this change - dictators see my domain name and block requests to it right away. I don't know if they did it on purpose or just forgot about those who are less fortunate in regards to access to information, but this is a sad development. This censorship circumvention technique is actively used in the wild and losing Amazon is no good."

The Amazon team suggests that if your application is not able to utilize the virtual-hosted style request format, or if you have any questions or concerns, you may reach out to AWS Support. To know more about this news, check out the official announcement page from Amazon.

Update from the Amazon team on 8th May

Amazon's Chief Evangelist for AWS, Jeff Barr, sat with the S3 team to understand this change in detail. After getting a better understanding, he posted an update on why the team plans to deprecate the path-based model. Here is his comparison of the old vs the new.

S3 currently supports two different addressing models: path-style and virtual-hosted style. Take a quick look at each one.

The path-style model looks either like this (the global S3 endpoint):

https://s3.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

Or this (one of the regional S3 endpoints):

https://s3-us-east-2.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3-us-east-2.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

Here, jbarr-public and jeffbarr-public are bucket names; images/ritchie_and_thompson_pdp11.jpeg and classic_amazon_door_desk.png are object keys. Even though the objects are owned by distinct AWS accounts and are in different S3 buckets and possibly in distinct AWS regions, both of them are in the DNS subdomain s3.amazonaws.com. Hold that thought while we look at the equivalent virtual-hosted style references:

https://jbarr-public.s3.amazonaws.com/images/ritchie_and_thompson_pdp11.jpeg
https://jeffbarr-public.s3.amazonaws.com/classic_amazon_door_desk.png

These URLs reference the same objects, but the objects are now in distinct DNS subdomains (jbarr-public.s3.amazonaws.com and jeffbarr-public.s3.amazonaws.com, respectively). The difference is subtle, but very important. When you use a URL to reference an object, DNS resolution is used to map the subdomain name to an IP address. With the path-style model, the subdomain is always s3.amazonaws.com or one of the regional endpoints; with the virtual-hosted style, the subdomain is specific to the bucket. This additional degree of endpoint specificity is the key that opens the door to many important improvements to S3.

A select few in the community are in favor of this, as per one user comment on Hacker News: "Thank you for listening! The original plan was insane. The new one is sane. As I pointed out here https://twitter.com/dvassallo/status/1125549694778691584 thousands of printed books had references to V1 S3 URLs. Breaking them would have been a huge loss. Thank you!"

But for others, the Amazon team has failed to address the domain censorship issue, as another user says: "Still doesn't help with domain censorship. This was discussed in-depth in the other thread from yesterday, but TLDR, it's a lot harder to block https://s3.amazonaws.com/tiananmen-square-facts than https://tiananmen-square-facts.s3.amazonaws.com because DNS lookups are made before HTTPS kicks in."

Read about this update in detail here.

Amazon S3 Security access and policies
3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations
Amazon introduces S3 batch operations to process millions of S3 objects
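For developers on the Python SDK, the addressing style is a client-level configuration rather than something baked into each call. A minimal sketch (the bucket and key are Jeff Barr's examples from above, reused as placeholders):

```python
import boto3
from botocore.client import Config

# Force the virtual-hosted style (bucket name in the DNS subdomain).
# Recent SDK versions already prefer this style where possible.
s3 = boto3.client(
    "s3",
    region_name="us-east-2",
    config=Config(s3={"addressing_style": "virtual"}),
)

# Generating a presigned URL makes the resulting style visible:
# virtual-hosted -> https://<bucket>.s3.<region>.amazonaws.com/<key>
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "jbarr-public", "Key": "classic_amazon_door_desk.png"},
    ExpiresIn=300,
)
print(url)
```

Switching `"virtual"` to `"path"` reproduces the soon-to-be-retired V1 form, which is a quick way to audit what your application currently emits.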


Oracle releases open source and commercial licenses for Java 11 and later

Savia Lobo
13 Sep 2018
3 min read
Oracle announced that it will provide JDK releases under two combinations of licenses (an open source license and a commercial license):

- Under the open source GNU General Public License v2, with the Classpath Exception (GPLv2+CPE)
- Under a commercial license for those using the Oracle JDK as part of an Oracle product or service, or who do not wish to use open source software

These combinations will replace the historical BCL (Binary Code License for Oracle Java SE technologies), which had a combination of free and paid commercial terms. The BCL has been the primary license for Oracle Java SE technologies for well over a decade. It historically contained 'commercial features' that were not available in OpenJDK builds. However, over the past year, Oracle has contributed these features to the OpenJDK Community, including Java Flight Recorder, Java Mission Control, Application Class-Data Sharing, and ZGC. From Java 11 onwards, therefore, Oracle JDK builds and OpenJDK builds will be essentially identical.

Minute differences between Oracle JDK 11 and OpenJDK

Oracle JDK 11 emits a warning when using the -XX:+UnlockCommercialFeatures option, whereas in OpenJDK builds this option results in an error. This difference remains in order to make it easier for users of Oracle JDK 10 and earlier releases to migrate to Oracle JDK 11 and later.

The javac --release command behaves differently for the Java 9 and Java 10 targets, because in those releases the Oracle JDK contained some additional modules that were not part of the corresponding OpenJDK releases, among them:

- javafx.base
- javafx.controls
- javafx.fxml
- javafx.graphics
- javafx.media
- javafx.web

This difference remains in order to provide a consistent experience for specific kinds of legacy use. These modules are either now available separately as part of OpenJFX, are now in both OpenJDK and the Oracle JDK because they were commercial features which Oracle contributed to OpenJDK (e.g., Flight Recorder), or were removed from Oracle JDK 11 (e.g., JNLP).

The Oracle JDK has always required third-party cryptographic providers to be signed by a known certificate, while the cryptography framework in OpenJDK has an open cryptographic interface, meaning it does not restrict which providers can be used. Oracle JDK 11 will continue to require a valid signature, and Oracle OpenJDK builds will continue to allow the use of either a valid signature or an unsigned third-party crypto provider.

Read more about this news in detail on the Oracle blog.

State of OpenJDK: Past, Present and Future with Oracle
Oracle announces a new pricing structure for Java
Oracle reveals issues in Object Serialization. Plans to drop it from core Java


Introducing Grafana’s ‘Loki’ (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes

Melisha Dsouza
13 Dec 2018
2 min read
On 11th December, at the KubeCon+CloudNativeCon conference held in Seattle, Grafana Labs announced the release of 'Loki', a horizontally scalable, highly available, multi-tenant log aggregation system for cloud natives, inspired by Prometheus. As compared to other log aggregation systems, Loki does not index the contents of the logs, but rather a set of labels for each log stream. Storing compressed, unstructured logs and only indexing metadata makes it cost-effective as well as easy to operate. Users can seamlessly switch between metrics and logs using the same labels that they are already using with Prometheus. Loki can store Kubernetes Pod logs; metadata such as Pod labels is automatically scraped and indexed.

Features of Loki
- Loki is optimized to search, visualize, and explore a user's logs natively in Grafana. It is optimized for Grafana, Prometheus, and Kubernetes.
- Grafana 6.0 provides a native Loki data source and a new Explore feature that makes logging a first-class citizen in Grafana.
- Users can streamline incident response, switching between metrics and logs using the same Kubernetes labels that they are already using with Prometheus.
- Loki is open source alpha software with a static binary and no dependencies.
- Loki can be used outside of Kubernetes, but the team says that their initial use case is "very much optimized for Kubernetes".
- With promtail, all Kubernetes labels for a user's logs are automatically set up the same way as in Prometheus. It is also possible to manually label log streams, and the team will be exploring integrations to make Loki "play well with the wider ecosystem".

Twitter is buzzing with positive comments for Grafana. Users are pretty excited about this release, complimenting Loki's cost-effectiveness and ease of use.
https://twitter.com/pracucci/status/1072750265982509057
https://twitter.com/AnkitTimbadia/status/1072701472737902592

Head over to Grafana Labs' official blog to know more about this release. Alternatively, you can check out GitHub for a demo of three ways to try out Loki: using the Grafana free hosted demo, running it locally with Docker, or building from source.

Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
Uber open sources its large scale metrics platform, M3 for Prometheus
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
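The core design point - index only the stream labels, never the log text - can be illustrated with a small, purely illustrative Python sketch. This is not Loki's implementation, just the idea: a tiny label index narrows the search to a few streams, and only those streams' raw contents are scanned:

```python
from collections import defaultdict

class LabelIndexedLogStore:
    """Toy store in Loki's spirit: the index holds only label sets,
    while log lines stay as cheap, unindexed (compressible) chunks."""

    def __init__(self):
        self.streams = {}              # stream_id -> list of raw lines
        self.index = defaultdict(set)  # (label, value) -> {stream_id}

    def push(self, labels, line):
        stream_id = tuple(sorted(labels.items()))
        self.streams.setdefault(stream_id, []).append(line)
        for pair in stream_id:
            self.index[pair].add(stream_id)

    def query(self, selector, needle=""):
        # 1) Narrow to matching streams via the tiny label index...
        ids = set.intersection(*(self.index[p] for p in selector.items()))
        # 2) ...then brute-force scan only those streams' contents.
        return [l for sid in ids for l in self.streams[sid] if needle in l]

store = LabelIndexedLogStore()
store.push({"app": "api", "pod": "api-0"}, "GET /health 200")
store.push({"app": "api", "pod": "api-0"}, "GET /users 500")
store.push({"app": "db", "pod": "db-0"}, "checkpoint complete")
print(store.query({"app": "api"}, needle="500"))  # ['GET /users 500']
```

Because the index grows with the number of label pairs rather than the volume of log text, storage stays cheap - which is the trade-off the announcement highlights.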

Cloudflare’s Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly

Melisha Dsouza
12 Nov 2018
5 min read
Cloudflare's cloud computing platform Workers doesn't use containers or virtual machines to deploy computing. Workers allows users to build serverless applications on Cloudflare's data centers. It provides a lightweight JavaScript execution environment to augment existing applications or create entirely new ones without having to configure or maintain infrastructure.

Why did Cloudflare create Workers?

Cloudflare previously provided only a limited set of features that its own developers could build in-house; there was not much flexibility for customers to build features themselves. To enable users to write code on servers deployed around the world, Cloudflare had to allow untrusted code to run with low overhead, while processing millions of requests per second at very high speed. Customers couldn't write their own code without the team's supervision, and traditional virtualization and container technologies would be expensive - running thousands of Kubernetes pods across Cloudflare's 155 data centers would be resource-intensive. Enter Cloudflare's Workers, built to solve these issues.

Features of Workers

#1 'Isolates' - run code from multiple customers
Isolates is a technology built by the Google Chrome team to power the JavaScript engine in that browser, V8. Isolates are lightweight contexts that group variables with the code allowed to mutate them. A single process can run hundreds or thousands of Isolates, easily switching between them. Thus, Isolates make it possible to run untrusted code from different customers within a single operating system process. They start very quickly (a given Isolate can start around a hundred times faster than a Node process on a machine) and do not allow one Isolate to access the memory of another.

#2 Cold starts
Workers rethink the 'cold start' that happens when a new copy of code has to be started on a machine. In the Lambda world, this means spinning up a new containerized process, which can delay requests for as much as ten seconds, ending up in a terrible user experience. A Lambda can only process one single request at a time; a new Lambda has to be cold-started every time an additional concurrent request is received, and if a Lambda doesn't get a request soon enough, it will be shut down and it all starts again. Since Workers don't have to start a process, Isolates start in 5 milliseconds. Workers scale and deploy quickly, entirely upgrading existing serverless technologies.

#3 Context switching
A normal context switch performed by an OS can take as much as 100 microseconds. When multiplied by all the Node, Python, or Go processes running on average Lambda servers, this leads to a heavy overhead, splitting the CPU's power between running the customer's code and switching between processes. An Isolate-based system runs all of the code in a single process, which means there are no expensive context switches; the machine can invest virtually all of its time running your code.

#4 Memory
V8 was designed to be multi-tenant: it runs the code from the many tabs in a user's browser in isolated environments within a single process. Since memory is often the highest cost of running a customer's code, V8 lowers it and dramatically changes the cost economics.

#5 Security
It is not safe to run code from multiple customers within the same process without extensive care; testing, fuzzing, penetration testing, and bounties are required to build a truly secure system of that complexity. The open-source nature of V8 helps in creating an isolation layer that helps Cloudflare take care of the security aspect.

Cloudflare's Workers also allows users to build responses from multiple background service requests, either to the Cloudflare cache, the application origin, or third-party APIs. They can build conditional responses for inbound requests to assess and subsequently block or reroute malicious or unauthorized requests. All of this at just a third of what AWS costs, remarked an astute Twitter observer.
https://twitter.com/seldo/status/1061461318765555713

Running code through WebAssembly

One of the disadvantages of using Workers is that, since it is an Isolate-based system, it cannot run arbitrary compiled code. Users have to either write their code in JavaScript, or in a language which targets WebAssembly (e.g. Go or Rust). Also, if a user cannot recompile their processes, they won't be able to run them in an Isolate. This has been nicely summarised in the above-mentioned tweet. He notes that WebAssembly modules are already in the npm registry, which creates the potential for npm to become the dependency management solution for every programming language. He mentions that the "availability of open source libraries to achieve the task at hand is the primary reason people pick a programming language". This leads us to the question: how does software development change when you can use any library anytime?

You can head over to the Cloudflare blog to understand more about containerless cloud computing.

Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers
Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
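The process-versus-context cost argument is easy to feel locally. The toy benchmark below (illustrative only, and nothing to do with Cloudflare's actual stack) contrasts spawning a fresh interpreter process per "request" with reusing a throwaway context inside one long-lived process:

```python
import subprocess
import sys
import time

REQUESTS = 20

# Model A: a fresh process per request (container/Lambda-style cold start).
start = time.perf_counter()
for _ in range(REQUESTS):
    subprocess.run([sys.executable, "-c", "print('hi')"],
                   capture_output=True, check=True)
per_process = (time.perf_counter() - start) / REQUESTS

# Model B: one long-lived process; each request runs in a fresh, throwaway
# namespace (a very loose stand-in for an Isolate-like context).
start = time.perf_counter()
for _ in range(REQUESTS):
    context = {}
    exec("result = 'hi'", context)
per_context = (time.perf_counter() - start) / REQUESTS

print(f"new process per request: {per_process * 1e3:8.2f} ms")
print(f"new context per request: {per_context * 1e6:8.2f} us")
```

The absolute numbers will vary by machine, but the gap of several orders of magnitude mirrors the article's ten-seconds-versus-five-milliseconds contrast.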


Azure Functions 3.0 released with support for .NET Core 3.1!

Savia Lobo
12 Dec 2019
2 min read
On 9th December, Microsoft announced that the go-live release of Azure Functions 3.0 is now available. Among the many new capabilities and functionality added to this release, one notable addition is support for the newly released .NET Core 3.1 - an LTS (long-term support) release - and Node 12.

With the advantage of being able to build and deploy 3.0 functions in production, Azure Functions 3.0 brings newer capabilities, including the ability to target .NET Core 3.1 and Node 12, and high backward compatibility for existing apps running on older language versions, without any code changes.

"While the runtime is now ready for production, and most of the tooling and performance optimizations are rolling out soon, there are still some tooling improvements to come before we announce Functions 3.0 as the default for new apps. We plan to announce Functions 3.0 as the default version for new apps in January 2020," the official announcement mentions.

While users running on earlier versions of Azure Functions will continue to be supported, the company does not plan to deprecate 1.0 or 2.0 at present. "Customers running Azure Functions targeting 1.0 or 2.0 will also continue to receive security updates and patches moving forward - to both the Azure Functions runtime and the underlying .NET runtime - for apps running in Azure. Whenever there's a major version deprecation, we plan to provide notice at least a year in advance for users to migrate their apps to a newer version," Microsoft mentions.

https://twitter.com/rickvdbosch/status/1204115191367114752
https://twitter.com/AzureTrenches/status/1204298388403044353

To know more about this in detail, read Azure Functions' official documentation.

Creating triggers in Azure Functions [Tutorial]
Azure Functions 2.0 launches with better workload support for serverless
Serverless computing wars: AWS Lambdas vs Azure Functions
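For readers new to the platform, an Azure Function is just a handler plus a trigger binding; the runtime major version (2.0 vs 3.0) is chosen per function app, not in the code. A minimal HTTP-triggered sketch using the Python programming model (the binding wiring in function.json is omitted, and the names are illustrative):

```python
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    """HTTP-triggered function: echoes a 'name' query parameter.

    The trigger/binding configuration lives in the accompanying
    function.json; upgrading the app from runtime 2.0 to 3.0 does
    not require changes here.
    """
    logging.info("Python HTTP trigger function processed a request.")

    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```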


Serverless Computing 101

Guest Contributor
09 Feb 2019
5 min read
Serverless applications began gaining popularity when Amazon launched AWS Lambda back in 2014. Since then, we have become more familiar with serverless computing as it grows exponentially in use and reference among the vendors entering the market with their own solutions. The reason behind the hype of serverless computing is that it requires no infrastructure management, a modern approach that lets the enterprise lighten its workload.

What is Serverless Computing?

It is a special kind of software architecture which executes application logic in an environment without visible processes, operating systems, servers, and virtual machines. In serverless computing, provisioning and managing the infrastructure is handled entirely by the service provider. Serverless defines a cloud service that abstracts the details of the cloud-based processor from its user; this does not mean servers are no longer needed, but they are not user-specified or controlled. Serverless computing refers to serverless architecture, which relates to applications that depend on third-party services (BaaS) and containers (FaaS). (Image source: Tatvasoft)

The top serverless computing providers like Amazon, Microsoft, Google, and IBM provide serverless offerings like FaaS to companies like Netflix, Coca-Cola, Codepen, and many more.

FaaS

Function as a Service is a mode of cloud computing architecture where developers write business logic functions or code which is executed by the cloud provider. Developers can upload units of functionality into the cloud that are independently executed; the cloud service provider manages everything from execution to scaling automatically. (A minimal FaaS handler is sketched at the end of this article.)

Key components of FaaS:
- Events - Something that triggers the execution of the function. For instance: uploading a file or publishing a message.
- Functions - An independent unit of deployment. For instance: processing a file or performing a scheduled task.
- Resources - Components used by the function. For instance: file system services or database services.

BaaS

Backend as a Service allows developers to write and maintain only the frontend of the application, using backend services without building and maintaining them. BaaS providers offer pre-written software for activities like user authentication, database management, remote updating, cloud storage, and much more. Developers do not have to manage servers or virtual machines to keep their applications running, which helps them build and launch applications more quickly. (Image courtesy: Gallantra)

Use-cases of Serverless Computing
- Batch jobs and scheduled tasks: jobs that require intense parallel computation, IO, or network access.
- Business logic: orchestration of microservice workloads that execute a series of steps.
- Chatbots: scale automatically at peak demand times.
- Continuous Integration pipelines: remove the need for pre-provisioned build hosts.
- Capturing database changes: auditing or ensuring modifications meet quality standards.
- HTTP REST APIs and web apps: traditional request/response workloads.
- Mobile backends: build on the REST API backend workload above the BaaS APIs.
- Multimedia processing: execute a transformational process in response to a file upload.
- IoT sensor input messages: receive signals and scale in response.
- Stream processing at scale: process data within a potentially infinite stream of messages.

Should you use Serverless Computing?

Merits:
- Fully managed services - you do not have to worry about the execution process.
- Supports an event-triggered approach - sets priorities as per requirements.
- Offers scalability - automatically handles load balancing.
- Pay only for execution time - you pay just for what you used.
- Quick development and deployment - run extensive test cases without worrying about other components.
- Cut down time-to-market - you can look at your refined product within hours of creating it.

Demerits:
- Third-party dependency - developers have to depend on cloud service providers completely.
- Lacking operational tools - need to depend on providers for debugging and monitoring.
- High complexity - takes more time and is difficult to manage many functions.
- Functions cannot run for a longer period - only suitable for applications with shorter processes.
- Limited mapping to database indexes - challenging to configure nodes and indexes.
- Stateless functions - resources cannot persist within a function after it exits.

Serverless computing can be seen as the future for the next generation of cloud-native applications and is a new approach to writing and deploying applications that allows developers to focus only on the code. This approach helps reduce time to market along with operational costs and system complexity. Third-party services like AWS Lambda have eliminated the requirement to set up and configure physical servers or virtual machines. It is always best to take the advice of an expert who holds years of experience in software development with modern technologies.

Author Bio: Vikash Kumar is a manager at the software outsourcing company Tatvasoft.com. He has a keen interest in blogging and likes to share useful articles on computing. Vikash has also published bylines on major publications like KDnuggets, Entrepreneur, SAP, and many more.

Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNativeCon
Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications
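As promised above, here is a minimal FaaS sketch: an AWS Lambda handler in Python responding to an S3 upload event. The event/function/resource split mirrors the key components listed earlier; the bucket and key access follows Lambda's documented S3 event shape, and the processing itself is a placeholder:

```python
import json
import urllib.parse

def handler(event, context):
    """Minimal FaaS unit: triggered by an S3 upload (the 'event'),
    runs one piece of business logic (the 'function'), and would
    typically touch storage or a database (the 'resources')."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results.append(f"processed s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(results)}
```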

OpenSky is now a part of the Alibaba family

Bhagyashree R
06 Sep 2018
2 min read
Yesterday, Chris Keane, the General Manager of OpenSky, announced that OpenSky has been acquired by the Alibaba Group. OpenSky is a network of businesses that empower modern global trade for SMBs and help people discover, buy, and share unique goods that match their individual taste.

OpenSky will join Alibaba Group in two capacities: one of OpenSky's teams will become part of Alibaba.com's North America B2B business, serving US-based buyers and suppliers; the other will become a wholly-owned subsidiary of Alibaba Group consisting of OpenSky's marketplace and SaaS businesses.

In 2015, Alibaba Group acquired a minority ownership stake in OpenSky. In 2017, they collaborated with Alibaba's B2B leadership team to solve the challenges faced by small businesses. According to Chris, both companies share a common interest, which is to help small businesses: "It was thrilling to discover that our counterparts at Alibaba share our obsession with helping SMBs. We've quickly aligned on a global vision to provide access to markets and resources for businesses and entrepreneurs, opening new doors and knocking down obstacles."

In this announcement, Chris also mentioned that they will be coming up with powerful concepts to serve small businesses everywhere in the near future. To know more, read the official announcement on LinkedIn.

Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment
Digitizing the offline: How Alibaba's FashionAI can revive the waning retail industry
Why Alibaba cloud could be the dark horse in the public cloud race


CNCF Sandbox, the home for evolving cloud native projects, accepts Google’s OpenMetrics Project

Fatema Patrawala
14 Aug 2018
3 min read
The Cloud Native Computing Foundation (CNCF) has accepted OpenMetrics, an open source specification for metrics exposition, into the CNCF Sandbox, a home for early stage and evolving cloud native projects. Google cloud engineers and other vendors had been working on this persistently for the past several months, and it has finally been accepted by CNCF. Engineers are also working on ways to support OpenMetrics in OpenCensus, a set of uniform tracing and stats libraries that work with multi-vendor services.

OpenMetrics will bring together the maturity and adoption of Prometheus and Google's background in working with stats at extreme scale. It will also bring in the experience and needs of a variety of projects, vendors, and end-users who are aiming to move away from the hierarchical way of monitoring and to enable users to transmit metrics at scale.

The open source initiative, focused on creating a neutral metrics exposition format, will provide a sound data model for current and future needs of users. It will embed into a standard that is an evolution of the widely adopted Prometheus exposition format. While there are numerous monitoring solutions available today, many do not focus on metrics and are based on old technologies with proprietary, hard-to-implement, hierarchical data models.

"The key benefit of OpenMetrics is that it opens up the de facto model for cloud native metric monitoring to numerous industry leading implementations and new adopters. Prometheus has changed the way the world does monitoring and OpenMetrics aims to take this organically grown ecosystem and transform it into a basis for a deliberate, industry-wide consensus, thus bridging the gap to other monitoring solutions like InfluxData, Sysdig, Weave Cortex, and OpenCensus. It goes without saying that Prometheus will be at the forefront of implementing OpenMetrics in its server and all client libraries. CNCF has been instrumental in bringing together cloud native communities. We look forward to working with this community to further cloud native monitoring and continue building our community of users and upstream contributors," says Richard Hartmann, Technical Architect at SpaceNet, Prometheus team member, and founder of OpenMetrics.

OpenMetrics contributors include AppOptics, Cortex, Datadog, Google, InfluxData, OpenCensus, Prometheus, Sysdig, and Uber, among others.

"Google has a history of innovation in the metric monitoring space, from its early success with Borgmon, which has been continued in Monarch and Stackdriver. OpenMetrics embodies our understanding of what users need for simple, reliable and scalable monitoring, and shows our commitment to offering standards-based solutions. In addition to our contributions to the spec, we'll be enabling OpenMetrics support in OpenCensus," says Sumeer Bhola, Lead Engineer on Monarch and Stackdriver at Google.

For more information about OpenMetrics, please visit openmetrics.io. To quickly enable trace and metrics collection from your application, please visit opencensus.io.

5 reasons why your business should adopt cloud computing
Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment
Modern Cloud Native architectures: Microservices, Containers, and Serverless - Part 1
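To get a feel for the exposition format that OpenMetrics evolves from, here is a small sketch using the Python prometheus_client library (the metric names are illustrative). The text served on the /metrics endpoint is the Prometheus exposition format that OpenMetrics builds on:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metrics; labels work like Prometheus/OpenMetrics label sets.
REQUESTS = Counter("demo_requests_total", "Requests handled", ["path"])
LATENCY = Histogram("demo_request_seconds", "Request latency in seconds")

if __name__ == "__main__":
    # Serve the exposition text at http://localhost:8000/metrics
    start_http_server(8000)
    while True:
        with LATENCY.time():
            time.sleep(random.uniform(0.01, 0.1))  # pretend to do work
        REQUESTS.labels(path="/demo").inc()
```

Fetching http://localhost:8000/metrics while this runs shows plain-text samples such as `demo_requests_total{path="/demo"} 42.0` - the flat, label-based data model (rather than a hierarchical one) that the specification standardizes.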