Tech News - Cloud & Networking

376 Articles

CNCF releases 9 security best practices for Kubernetes to protect a customer's infrastructure

Melisha Dsouza
15 Jan 2019
3 min read
According to CNCF's bi-annual survey conducted in August 2018, 83% of respondents prefer Kubernetes as their container management tool. 58% of respondents use Kubernetes in production, 42% are evaluating it for future use, and 40% of enterprise companies (5000+ employees) are running Kubernetes in production. These statistics paint a clear picture of the popularity of Kubernetes among developers as a container orchestrator. However, the recent security flaw discovered in Kubernetes (now patched), which enabled attackers to compromise clusters and perform illicit activities, did raise concerns among developers. A container environment like Kubernetes consists of multiple layers that need to be secured on all fronts. With this in mind, the CNCF has released '9 Kubernetes Security Best Practices Everyone Must Follow'.

#1 Upgrade to the Latest Version
Kubernetes ships a quarterly update with various bug and security fixes. Customers are advised to always upgrade to the latest release with current security patches to safeguard their systems.

#2 Role-Based Access Control (RBAC)
By enabling RBAC, users can control who can access the Kubernetes API and what permissions they have. The blog advises against giving anyone cluster-admin privileges and recommends granting access only as needed, on a case-by-case basis (a minimal sketch appears at the end of this piece).

#3 Namespaces for security boundaries
Namespaces create an important level of isolation between components. The CNCF also notes that it is easier to apply security controls and policies when workloads are deployed in separate namespaces.

#4 Keeping sensitive workloads separate
Sensitive workloads should run on a dedicated set of machines. That way, if a less secure application connected to a sensitive workload is compromised, the latter remains unaffected.

#5 Securing cloud metadata access
Sensitive metadata storing confidential information, such as credentials, can be stolen and misused. The blog advises using Google Kubernetes Engine's metadata concealment feature to avoid this.

#6 Cluster network policies
Network policies let developers control network access to and from their containerized applications.

#7 Implementing a cluster-wide Pod Security Policy
A Pod Security Policy defines how workloads are allowed to run in a cluster.

#8 Improve node security
Ensure the host is configured correctly and securely by checking the node's configuration against CIS benchmarks. Make sure your network blocks access to ports that can be exploited by malicious actors, and minimize the administrative access given to Kubernetes nodes.

#9 Audit logging
Audit logs should be enabled and monitored for anomalous API calls and authorization failures, which can indicate that a malicious actor is trying to get into your system. The blog further advises looking for tools that assist in continuous monitoring and protection of containers.

You can head over to the Cloud Native Computing Foundation's official blog to read more about these best practices.
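As a concrete illustration of practice #2, here is a minimal sketch (ours, not from the CNCF post) that emits a namespace-scoped, read-only Role and a RoleBinding as JSON, which Kubernetes accepts alongside YAML. The namespace "team-a" and user "jane" are placeholders.

```python
import json

# Hypothetical least-privilege RBAC: a Role granting read-only access to
# pods in one namespace, bound to a single user. Apply the resulting file
# with `kubectl apply -f pod-reader-rbac.json`.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"namespace": "team-a", "name": "pod-reader"},
    "rules": [
        {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]}
    ],
}

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"namespace": "team-a", "name": "read-pods"},
    "subjects": [
        {"kind": "User", "name": "jane", "apiGroup": "rbac.authorization.k8s.io"}
    ],
    "roleRef": {
        "kind": "Role",
        "name": "pod-reader",
        "apiGroup": "rbac.authorization.k8s.io",
    },
}

with open("pod-reader-rbac.json", "w") as f:
    json.dump({"apiVersion": "v1", "kind": "List", "items": [role, binding]}, f, indent=2)
```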
Read next:
CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
Introducing Grafana's 'Loki' (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes


Tumblr open sources its Kubernetes tools for better workflow integration

Melisha Dsouza
15 Jan 2019
3 min read
Yesterday, Tumblr announced that it has open sourced three tools, developed in-house, that help developers integrate Kubernetes into their workflows. Tumblr built these tools while migrating its eleven-year-old infrastructure to Kubernetes. These are the three tools and their features as listed on the Tumblr blog:

#1 k8s-sidecar-injector
Containerizing complex applications can be time-consuming. Sidecars offer a way out, letting developers emulate older deployments with co-located services on virtual machines or physical hosts. The k8s-sidecar-injector dynamically injects sidecars, volumes, and environment data into pods as they are launched, which reduces the overhead of copy-pasting code to add sidecars to a developer's deployments and cronjobs. The tool listens to the Kubernetes API for pod launches and determines which sidecar to inject. It is particularly useful when containerizing legacy applications that require a complex sidecar configuration.

#2 k8s-config-projector
The k8s-config-projector is a command-line tool that grew out of the need to access a subset of configuration data (feature flags, lists of hosts/IPs+ports, and application settings) and to be informed as soon as this data changes. Config data defines how deployed services operate at Tumblr. The Kubernetes ConfigMap resource lets users provide their services with configuration data and update it in running pods without redeploying the application. To use this feature to configure Tumblr's services and jobs in a Kubernetes-native manner, the team had to bridge the gap between their canonical configuration store (a git repo of config files) and ConfigMaps. k8s-config-projector combines the git repo hosting configuration data with "projection manifest" files that describe how to group and extract settings from the config repo and transmute them into ConfigMaps (a minimal sketch of the idea follows below). Developers can now encode the set of configuration data an application needs to run into a projection manifest. The blog states that "as the configuration data changes in the git repository, CI will run the projector, projecting and deploying new ConfigMaps containing this updated data, without needing the application to be redeployed".

#3 k8s-secret-projector
Tumblr stores secure credentials (passwords, certificates, etc.) in access-controlled vaults. With the k8s-secret-projector, developers can request access to subsets of credentials for a given application without being granted access to the secrets as a whole. The tool ensures applications always have the appropriate secrets at runtime, while letting automated systems (certificate refreshers, DB password rotations, and so on) manage and update these credentials without redeploying or restarting the application. It does this by combining two repositories: projection manifests and credentials. A continuous integration (CI) tool such as Jenkins runs the tool against any change to the projection manifests repository, generating new Kubernetes Secret YAML files, which continuous deployment then ships to any number of Kubernetes clusters. The tool also allows secrets to be deployed safely in Kubernetes environments by encrypting generated Secrets before they touch the disk.
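To give a feel for the config-projection idea, here is a minimal sketch of our own (not Tumblr's actual code): read settings out of a checkout of a config repo and wrap them in a ConfigMap manifest that CI can apply. The repo path, file names, and namespace are all illustrative placeholders.

```python
import json
from pathlib import Path

# Minimal "projection": take files from a config-repo checkout and wrap
# their contents in a Kubernetes ConfigMap manifest.
def project_configmap(repo_dir, files, name, namespace):
    data = {Path(f).name: (Path(repo_dir) / f).read_text() for f in files}
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name, "namespace": namespace},
        "data": data,
    }

manifest = project_configmap(
    "config-repo", ["feature_flags.yaml"], "app-settings", "production"
)
print(json.dumps(manifest, indent=2))  # pipe to a file and `kubectl apply -f -`
```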
You can head over to Tumblr's official blog for examples of each tool.

Read next:
Introducing Grafana's 'Loki' (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes


Amazon is reportedly building a video game streaming service, says The Information

Sugandha Lahoti
14 Jan 2019
2 min read
According to a report by The Information, Amazon is developing a video game streaming service. Microsoft and Google have both previously announced similar game streaming offerings: in October, Google announced an experimental game streaming service named Project Stream, and in the same month Microsoft's gaming chief Phil Spencer confirmed Project xCloud, a game streaming service for any device.

Amazon's idea is to bring top gaming titles to virtually anyone with a smartphone or streaming device. The service would handle all the compute-intensive calculations needed to run graphics-intensive games in the cloud, then stream the results directly to a smart device, so that gamers get the same experience as running the titles natively on a high-end gaming system.

The Information says that although the Amazon gaming service isn't likely to launch until next year, Amazon has begun talking to games publishers about distributing their titles through the service. The initiative has a good chance of success given that Amazon is the biggest player in the cloud market: Amazon currently owns 32 percent of it, compared with Microsoft Azure's 17 percent and Google Cloud's 8 percent. That scale would make it easier for gamers to take advantage of Amazon's vast cloud offerings and play elaborate, demanding games even on their mobile devices.

As The Information notes, a successful streaming platform could upend the long-standing business model of the gaming world, in which customers pay $50 to $60 for a triple-A title. Amazon has yet to share details of such a video gaming service officially. Check out the full report on The Information.

Read next:
Microsoft announces Project xCloud, a new Xbox game streaming service
Now you can play Assassin's Creed in Chrome thanks to Google's new game streaming service
Corona Labs open sources Corona, its free and cross-platform 2D game engine


Black Hat hackers used IPMI cards to launch JungleSec ransomware, affecting multiple Linux servers

Savia Lobo
10 Jan 2019
3 min read
Unsecured IPMI (Intelligent Platform Management Interface) cards are serving as a gateway for the JungleSec ransomware, which has affected multiple Linux servers. The ransomware attacks were originally reported in early November 2018. Victims were running Windows, Linux, and Mac systems, but there were no traces of how they were being infected. Attackers have been using IPMI cards to gain access and install the JungleSec ransomware, which encrypts data and demands a 0.3 bitcoin payment (about $1,100) for the unlock key.

IPMI is a management interface built into server motherboards or installed as an add-on card. It enables administrators to remotely manage a computer: power it on and off, get system information, and access a KVM that provides remote console access. IPMI is also useful for managing servers, especially when renting servers from another company at a remote colocation center. However, if the IPMI interface is not properly configured, it can allow attackers to remotely connect to and take control of servers using default credentials.

Bleeping Computer said they have "spoken to multiple victims whose Linux servers were infected with the JungleSec Ransomware and they all stated the same thing; they were infected through unsecured IPMI devices". Bleeping Computer first reported this story on December 26, indicating that the hack only affected Linux servers. The attackers installed the JungleSec ransomware through the server's IPMI interface. In conversations Bleeping Computer had with two of the victims, one said that the IPMI interface was using the default manufacturer passwords. The other stated that the Admin user was disabled, but the attacker was still able to gain access through possible vulnerabilities. Once the attackers gained access to a server, they would reboot it into single-user mode to obtain root access, then download and compile the 'ccrypt' encryption program.

To secure an IPMI interface, the first step is to change the default password, as most of these cards ship with default passwords like Admin/Admin. "Administrators should also configure ACLs that allow only certain IP addresses to access the IPMI interface. In addition, IPMI interfaces should be configured to only listen on an internal IP address so that it is only accessible by local admins or through a VPN connection", Bleeping Computer reports. The report also includes a tip from Negulescu, not specific to IPMI interfaces, which suggests adding a password to the GRUB bootloader. Doing so makes it more difficult, if not impossible, to reboot into single-user mode from the IPMI remote console.

To know more about this news in detail, head over to Bleeping Computer's complete coverage.
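As an illustration of the first defensive step (finding exposed IPMI interfaces before attackers do), the sketch below sends an RMCP/ASF Presence Ping, the standard discovery packet for IPMI's UDP port 623, using plain Python. This is our own sketch, not from the Bleeping Computer report; the packet bytes follow the ASF 2.0 specification, and the target address is a placeholder you must be authorized to scan.

```python
import socket

# RMCP/ASF Presence Ping: any reply (a "Presence Pong") means an IPMI
# interface is reachable and should be firewalled or moved to a
# management VLAN.
ASF_PING = bytes([
    0x06, 0x00, 0xFF, 0x06,  # RMCP: version, reserved, sequence, class=ASF
    0x00, 0x00, 0x11, 0xBE,  # ASF IANA enterprise number
    0x80, 0x00, 0x00, 0x00,  # type=Presence Ping, tag, reserved, data length
])

def ipmi_exposed(host, timeout=2.0):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(ASF_PING, (host, 623))
        try:
            s.recvfrom(512)
            return True   # got a Presence Pong: IPMI is reachable
        except socket.timeout:
            return False

print(ipmi_exposed("192.0.2.10"))  # placeholder: scan only hosts you own
```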
Read next:
Go Phish! What do thieves get from stealing our data?
Hackers are our society's immune system – Keren Elazari on the future of Cybersecurity
Sennheiser opens up about its major blunder that let hackers easily carry out man-in-the-middle attacks


TLS comes to Google public DNS with support for DNS-over-TLS connections

Prasad Ramesh
10 Jan 2019
2 min read
In a blog post yesterday, Google announced that its public DNS now supports Transport Layer Security (TLS).

Google DNS
Google's public Domain Name System (DNS) service is the world's largest address resolver. It allows anyone using it to convert a human-readable domain name into the addresses used by browsers. Much like search queries, the domains a user looks up can expose sensitive information. With DNS-over-TLS, users can add security to the queries between their devices and Google Public DNS.

Google DNS-over-TLS
The need for protection from forged websites and surveillance has grown over the years. The DNS-over-TLS protocol provides a standard way to secure and keep private the DNS traffic between users and resolvers. Users can now secure their connections to Google Public DNS with TLS, the same technology that makes HTTPS connections secure. The DNS-over-TLS specification is implemented according to the RFC 7766 recommendations, which minimizes the overhead of using TLS and supports TLS 1.3, TCP fast open, and pipelining multiple queries over a single connection. This is deployed on Google's own infrastructure, which Google claims provides reliable and scalable management for DNS-over-TLS connections.

Enabling DNS-over-TLS connections
DNS-over-TLS can be used by Android 9 Pie users, and Linux users can use the stubby resolver to communicate with the DNS-over-TLS service. Users who run into problems can file an issue with Google. A comment on Hacker News says: "This is a DNS provided by Google, a company that earns money by analysing user data. If you want privacy, run your own DNS." Google, however, has stated in its guides that it does not store any personally identifiable information long term.
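For the curious, a DNS-over-TLS exchange can be reproduced with nothing but Python's standard library. The sketch below is our own illustration, not taken from Google's documentation: it opens a TLS connection to 8.8.8.8 on port 853 (validating the certificate against the dns.google hostname, an assumption on our part), sends a raw DNS query with the 2-byte TCP length prefix, and reports the size of the encrypted reply. A real client would parse the answer section.

```python
import socket
import ssl
import struct

def _read_exact(tls, n):
    # TLS reads may return fewer bytes than requested; loop until done.
    buf = b""
    while len(buf) < n:
        chunk = tls.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed early")
        buf += chunk
    return buf

def dot_query(name, server="8.8.8.8"):
    # DNS header: ID, flags (RD set), 1 question, 0 answer/authority/extra.
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, root byte, QTYPE=A, QCLASS=IN.
    question = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question += struct.pack("!HH", 1, 1)
    msg = header + question
    ctx = ssl.create_default_context()
    with socket.create_connection((server, 853), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="dns.google") as tls:
            tls.sendall(struct.pack("!H", len(msg)) + msg)  # TCP framing
            length = struct.unpack("!H", _read_exact(tls, 2))[0]
            return _read_exact(tls, length)

print(len(dot_query("example.com")), "bytes of DNS answer received over TLS")
```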
Read next:
Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
Root Zone KSK (Key Sign Key) Rollover to resolve DNS queries was successfully completed
Mozilla's new Firefox DNS security updates spark privacy hue and cry


TriggerMesh announces open source ‘Knative Lambda Runtime’; AWS Lambda functions can now be deployed on Knative!

Melisha Dsouza
10 Jan 2019
2 min read
"We believe that the key to enabling cloud native applications, is to provide true portability and communication across disparate cloud infrastructure." Mark Hinkle, co-founder of TriggerMesh Yesterday, TriggerMesh- the open source multi-cloud service management platform- announced their open source project ‘Knative Lambda Runtime’ (TriggerMesh KLR). KLR will bring AWS Lambda serverless computing to Kubernetes which will enable users to run Lambda functions on Knative-enabled clusters and serverless clouds. Amazon Web Services' (AWS) Lambda for serverless computing can only be used on AWS and not on another cloud platform. TriggerMesh KLR changes the game completely as now, users can avail complete portability of Amazon Lambda functions to Knative native enabled clusters, and Knative enabled serverless cloud infrastructure “without the need to rewrite these serverless functions”. [box type="shadow" align="" class="" width=""]Fun fact: KLR is pronounced as ‘clear’[/box] Features of TriggerMesh Knative Lambda Runtime Knative is a  Google Cloud-led Kubernetes-based platform which can be used to build, deploy, and manage modern serverless workloads. KLR are Knative build templates that can be used to runan AWS Lambda function in a Kubernetes cluster as is in a Knative powered Kubernetes cluster (installed with Knative). KLR enables serverless users to move functions back and forth between their Knative and AWS Lambda. AWS  Lambda Custom Runtime API in combination with the Knative Build system makes deploying KLR possible. Serverless users have shown a positive response to this announcement, with most of them excited for this news. Kelsey Hightower, developer advocate, Google Cloud Platform, calls this news ‘dope’ and we can understand why! His talk at KubeCon+CloudNativeCon 2018 had focussed on serveless and its security aspects. Now that AWS Lambda functions can be run on Google’s Knative, this marks a new milestone for TriggerMesh. https://twitter.com/kelseyhightower/status/1083079344937824256 https://twitter.com/sebgoa/status/1083014086609301504 It would be interesting to see how this moulds the path to a Kubernetes hybrid-cloud model. Head over to TriggerMesh’s official blog for more insights to this news. Introducing Grafana’s ‘Loki’ (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes  
Read next:
Introducing Grafana's 'Loki' (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

Bash 5.0 is here with new features and improvements

Natasha Mathur
08 Jan 2019
2 min read
The GNU project made version 5.0 of its popular POSIX shell Bash (Bourne Again SHell) available yesterday. Bash 5.0 introduces new features and improvements such as BASH_ARGV0, EPOCHSECONDS, and EPOCHREALTIME, among others. Bash was first released in 1989 and was created for the GNU project as a replacement for the Bourne shell. It is capable of interactive command-line editing and, on architectures that support it, job control. It is a complete implementation of the IEEE POSIX shell and tools specification.

Key updates

New features
Bash 5.0 adds an EPOCHSECONDS variable, which expands to the time in seconds since the Unix epoch. A related new variable, EPOCHREALTIME, also expands to the number of seconds since the Unix epoch, the difference being that it is a floating-point value with microsecond granularity. BASH_ARGV0 is another newly added variable: it expands to $0 and sets $0 on assignment. A newly defined option in config-top.h allows the shell to use a static value for $PATH, and a new shell option can enable and disable sending history to syslog at runtime.

Other changes
The `globasciiranges' option is now enabled by default and can be set to off by default at configuration time. POSIX mode is now capable of enabling the `shift_verbose' option. The `history' builtin can now delete ranges of history entries using `-d start-end'. A change that caused strings containing backslashes to be flagged as glob patterns has been reverted.

For complete information on Bash 5.0, check out its official release notes.
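A quick way to try the new variables is to invoke bash from a small Python script. This assumes a bash binary of version 5.0 or later on your PATH; on older versions the variables simply expand to nothing.

```python
import subprocess

# EPOCHSECONDS and EPOCHREALTIME expand to the time since the Unix epoch,
# the latter with microsecond granularity; BASH_ARGV0 mirrors $0.
script = 'echo "seconds=$EPOCHSECONDS realtime=$EPOCHREALTIME argv0=$BASH_ARGV0"'
out = subprocess.run(["bash", "-c", script], capture_output=True, text=True)
print(out.stdout.strip())
# Example output: seconds=1546933800 realtime=1546933800.123456 argv0=bash
```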
Read next:
GNU ed 1.15 released!
GNU Bison 3.2 got rolled out
GNU Guile 2.9.1 beta released with JIT native code generation to speed up all Guile programs


Internet Governance Project (IGP) survey on IPv6 adoption: initial reports

Prasad Ramesh
07 Jan 2019
3 min read
The Internet Governance Project (IGP) did research last year to understand the factors affecting network operators' decisions about IPv6 adoption. The study was done by Georgia Tech's IGP in collaboration with the Internet Corporation for Assigned Names and Numbers (ICANN), as both organizations believed the internet community needed a better understanding of the motives for upgrading from IPv4 to IPv6. The study, titled The Hidden Standards War: Economic Factors Affecting IPv6 Deployment, should be out this month.

IPv6 is a version of the Internet Protocol with a much larger address space. IPv4 addresses are limited, about 4 billion in total, and may be depleted in the future, so IPv6 adoption will have to happen at some point. IPv6 can hold 2^128 addresses, more than enough for the foreseeable future. IPv6 addresses are also longer than IPv4 addresses and are written in hexadecimal form, containing both numbers and letters.

Initial results of the study
The IGP report is still in draft, but some initial findings have been shared. IPv6 is not going to be disregarded completely after all, especially in mobile networks, where both the hardware and the software support it; although IPv6 capability is often turned off for compatibility reasons, it remains available. The initial findings show that 79% of countries (169 in total) did not have any noteworthy IPv6 deployment, with deployment remaining at or below 5% when the study was conducted last year. 12% of countries (26) showed increasing deployment, and 8% (18 countries) showed a plateau, with IPv6 capability growth having stopped somewhere between 8% and 59%.

Why the slow adoption?
The authors say it comes down to the costs and benefits of upgrading. Investigating the economic incentives, they found there is no real need for operators to upgrade their hardware. No one uses IPv6 exclusively, since all public and almost all private network service providers have to offer full IPv4 compatibility. With this condition in place, operators have only three choices:

- Stick to IPv4
- Implement dual stack and provide both
- Run IPv6 where compatible and use tunneling for IPv4 compatibility

To move towards IPv6, dual stack is not economical, so the third option seems to be the only viable one. There is little benefit for operators in shifting to IPv6: even if one operator migrates, it puts no pressure on the others to follow. Network operators bear the maintenance costs exclusively, so a wealthier country can deploy more IPv6 networks. Even though IPv6 was introduced in 1994, a big obstacle to adoption is that it is incompatible with IPv4. IPv6 adoption can make sense when a network needs to grow, but most networks don't need to grow; instead of buying new hardware and software to run IPv6, operators would rather buy additional IPv4 addresses, which are cheaper. The bottom line: there is no considerable incentive to change protocols until the remaining IPv4 pool is near depletion.
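The address-space arithmetic behind the survey's framing is easy to check with Python's standard library; this is a quick illustration of our own, not something from the IGP report.

```python
import ipaddress

# IPv4 offers 2^32 addresses (about 4 billion); IPv6 offers 2^128.
print(f"IPv4: {2**32:,} addresses")
print(f"IPv6: {2**128:,} addresses")

# The stdlib handles the hexadecimal IPv6 notation the article describes,
# including the compressed "::" form.
addr = ipaddress.ip_address("2001:db8::8a2e:370:7334")
print(addr.exploded)  # 2001:0db8:0000:0000:0000:8a2e:0370:7334
```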
Read next:
IPv6 support to be automatically rolled out for most Netify Application Delivery Network users
Oath's distributed network telemetry collector 'Panoptes' is now open source!
5G – Trick or Treat?


Liz Fong-Jones reveals she is leaving Google in February

Richard Gall
03 Jan 2019
2 min read
Liz Fong-Jones has been a key figure in the politicization of Silicon Valley over the last 18 months. But the Developer Advocate at Google Cloud Platform revealed today (3rd January 2019) that she is to leave the company in February, citing Google's lack of leadership in response to the demands made by employees during the Google walkout in November 2018. Fong-Jones hinted before Christmas that she had found another role, writing on Twitter that she had a new job lined up:
https://twitter.com/lizthegrey/status/1075837650433646593

That was confirmed today when Fong-Jones tweeted "Resignation letter is in. February 25 is my last day." Her new role hasn't yet been revealed, but it appears she will remain within SRE; she told one follower that she will likely be at SRECon in Dublin later in the year.
https://twitter.com/lizthegrey/status/1080837397347221505

She made it clear that she had no issue with her team, stating that her decision to leave was instead "a reflection on what Google's become over the 11 years I've worked there."

Why Liz Fong-Jones's exit from Google is important
Fong-Jones's exit doesn't reflect well on the company. If anything, it only serves to highlight Google's stubbornness: despite having months to respond to serious allegations of sexual harassment and systemic discrimination, there appears to be a refusal to acknowledge problems, let alone find a way forward to tackle them. From Fong-Jones's perspective, the move is probably as much pragmatic as it is symbolic. She spoke on Twitter of "burnout" at "doing what has to be done, as second shift work."
https://twitter.com/lizthegrey/status/1080848586135560192

While there are clearly personal reasons for Fong-Jones to leave Google, her importance as a figure in conversations around tech worker rights and diversity means her exit will carry significant symbolic power. It's likely that she'll continue to play an important part in helping tech workers, in Silicon Valley and elsewhere, organize for a better future, even as she aims to do "more of what you want to do".


Confluent, an Apache Kafka service provider, adopts a new license to fight against cloud service providers

Natasha Mathur
26 Dec 2018
4 min read
Software firms limiting their licenses to prevent cloud service providers from exploiting their open source code is a growing trend. One such firm is Confluent, an Apache Kafka service provider, which announced its new Confluent Community License two weeks ago. The new license allows users to download, modify, and redistribute the code, but does not let them offer the software as a service (SaaS). "What this means is that, for example, you can use KSQL however you see fit as an ingredient in your own products or services, whether those products are delivered as software or as SaaS, but you cannot create a KSQL-as-a-service offering. We'll still be doing all development out in the open and accepting pull requests and feature suggestions", says Jay Kreps, CEO, Confluent. The new license has no effect on Apache Kafka, which remains under the Apache 2.0 license and to which Confluent will continue to contribute.

Kreps pointed out that leading cloud providers such as Amazon, Microsoft, Alibaba, and Google all differ in how they approach open source today. Some partner with open source companies and offer hosted versions of their software as SaaS. Others take the open source code, implement it in their cloud offering, and push all of their investments into differentiated proprietary offerings. For instance, Michael Howard, CEO of MariaDB Corp., called Amazon's tactics "the worst behavior" he has seen in the software industry, enabled by a loophole in licensing, and said the cloud giant is "strip mining by exploiting the work of a community of developers who work for free", as first reported by SiliconANGLE.

One response Kreps describes is for open source software firms to focus on building more proprietary software and "pull back" from their open source investments. "But we think the right way to build fundamental infrastructure layers is with open code. As workloads move to the cloud we need a mechanism for preserving that freedom while also enabling a cycle of investment, and this is our motivation for the licensing change", Kreps writes.

Confluent's license change follows MongoDB, which switched to the Server Side Public License (SSPL) this October to prevent major cloud providers from misusing its open source code. MongoDB's decision was sparked by the fact that cloud vendors who are not responsible for developing a piece of software can "capture all the value" without contributing much back to the community, and by cloud providers taking MongoDB's open source code to offer hosted commercial versions of its database without following the open source rules. The license change helps create "an incredible opportunity to foster a new wave of great open source server-side software", said Eliot Horowitz, CTO and co-founder of MongoDB, who added that he hopes the change will "protect open source innovation".

MongoDB in turn followed the path of the "Commons Clause" license first adopted by Redis Labs. Commons Clause started as an initiative by a group of top software firms to protect their rights; it is added to existing open source licenses to produce a new, combined license that limits the commercial sale of the software. All of these efforts aim to ensure that open source communities are not taken advantage of by the leading cloud providers. As Kreps points out, "We think this is a positive change and one that can help ensure small open source communities aren't acting as free and unsustainable R&D (Research & development) for tech giants that put sustaining resources only into their own differentiated proprietary offerings".

Read next:
Neo4j Enterprise Edition is now available under a commercial license
GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation
Free Software Foundation updates their licensing materials, adds Commons Clause and Fraunhofer FDK AAC license

Google Cloud releases a beta version of SparkR job types in Cloud Dataproc

Natasha Mathur
21 Dec 2018
2 min read
Earlier this week, Google released a beta version of SparkR jobs on Cloud Dataproc, a cloud service that lets you run Apache Spark and Apache Hadoop in a cost-effective manner. SparkR jobs build out R support on GCP. SparkR is a package that provides a lightweight front end for using Apache Spark from R. It supports distributed machine learning using MLlib, and it can be used to process large Cloud Storage datasets and to perform computationally intensive work. The package also lets developers use operations modeled on dplyr, a powerful R package for transforming and summarizing tabular data with rows and columns, against datasets stored in Cloud Storage.

The R programming language is very efficient for building data analysis tools and statistical apps, and cloud computing has opened up even newer opportunities for developers working with R. Using GCP's Cloud Dataproc Jobs API, it becomes easier to submit SparkR jobs to a cluster without needing to open firewalls for access to web-based IDEs or to SSH onto the master node. With the API, it is easy to automate the repeatable R statistics that users want to run on their datasets.

Additionally, GCP for R helps avoid the infrastructure barriers that limit understanding of data, such as having to sample datasets because of compute or data size limits. GCP also allows you to build large-scale models to analyze datasets of sizes that would previously have required big investments in high-performance computing infrastructure.

For more information, check out the official Google Cloud blog post.
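Submission through the Jobs API can be scripted; the sketch below shells out to the gcloud CLI from Python. The subcommand and flags reflect our reading of the Dataproc documentation and should be treated as assumptions to verify against your SDK version; the cluster, region, and script names are placeholders.

```python
import subprocess

# Hypothetical sketch: submit a SparkR script to a Dataproc cluster via
# the gcloud CLI. Check `gcloud dataproc jobs submit --help` for the
# exact subcommand and flags available in your SDK version.
subprocess.run([
    "gcloud", "dataproc", "jobs", "submit", "spark-r",
    "my_analysis.R",            # placeholder: local SparkR script to run
    "--cluster", "my-cluster",  # placeholder cluster name
    "--region", "us-central1",  # placeholder region
], check=True)
```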
Read next:
Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
Google Cloud's Titan and Android Pie come together to secure users' data on mobile devices


Kong 1.0 is now generally available with gRPC support, an updated Database Abstraction Object and more

Amrata Joshi
21 Dec 2018
4 min read
Yesterday, the team at Kong announced the general availability of Kong 1.0, a scalable, fast, open source microservice API gateway that manages hybrid and cloud-native architectures. Kong can be extended through plugins, including authentication, traffic control, observability, and more. Kong 1.0 was first announced earlier this year, in September, at the Kong Summit. The Kong API can create a Certificate Authority that Kong nodes use to establish mutual TLS authentication with each other, and Kong can now balance traffic from mail servers and other TCP-based applications, extending its reach from L7 down to L4.

What's new in Kong 1.0?

gRPC
This release supports the gRPC protocol along with REST. gRPC is built on top of HTTP/2 and gives Kong users an option for connecting east-west traffic with low overhead and latency, enabling more mesh deployments in hybrid environments.

New migrations framework
This version of Kong introduces a new Database Abstraction Object (DAO), a framework that allows migrations from one database schema to another with nearly zero downtime. The new DAO lets users upgrade their Kong cluster all at once, without manual intervention on each node.

Plugin Development Kit (PDK)
The PDK is a set of Lua functions and variables that custom plugins can use to implement logic on Kong. Plugins built with the PDK will be compatible with Kong versions 1.0 and above. The PDK's interfaces are much easier to use than the bare-bones ngx_lua API. It allows users to isolate plugin operations such as logging or caching, and it is semantically versioned, which helps maintain backward compatibility.

Service mesh support
Users can now easily deploy Kong as a standalone service mesh. A service mesh can help address the security challenges of microservices: it secures services by integrating multiple layers of security with Kong plugins, and it features secure communication at every step of the request lifecycle.

Seamless connections
This release connects services in the mesh to services across all environments, platforms, and vendors. Kong 1.0 can be used to bridge the gap between cloud-native design and traditional architecture patterns.

Robust plugin architecture
This release comes with a robust plugin architecture that offers users unparalleled flexibility. Kong plugins provide key functionality and support integrations with other cloud-native technologies, including Prometheus, Zipkin, and many others. Kong's plugins can now execute code in the new preread phase, which improves performance.

AWS Lambda and Azure FaaS
Kong 1.0 improves interactions with AWS Lambda and Azure FaaS, including Lambda proxy integration. The Azure Functions plugin can be used to filter out headers disallowed by HTTP/2 when proxying HTTP/1.1 responses to HTTP/2 clients.

Deprecations in Kong 1.0

Core
The API entity and related concepts such as the /apis endpoint have been removed from this release; Routes and Services are used instead. The old DAO implementation and the old schema validation library have also been removed.

New Admin API
Filtering now happens with URL path changes (/consumers/x/plugins) instead of querystring fields (/plugins?consumer_id=x). Error messages have been reworked to be more consistent, precise, and informative, and the PUT method has been reimplemented.

Plugins
The galileo plugin has been removed. Some internal modules that were used by plugin authors before the introduction of the Plugin Development Kit (PDK) in 0.14.0 have also been removed, including the kong.tools.ip, kong.tools.public, and kong.tools.responses modules.

Major bug fixes
SNIs (Server Name Indication) are now correctly paginated, and null and default values are handled better. DataStax Enterprise 6.X no longer throws errors. Several typo, style, and grammar fixes have been made, and the router no longer injects an extra / in certain cases.

Read more about this release in Kong's blog post.
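To see the new path-style Admin API filtering in action, here is a hypothetical sketch against a local Kong 1.0 Admin API on its default port 8001, using the third-party requests library. The consumer name "jane" is a placeholder for a consumer created beforehand.

```python
import requests  # third-party: pip install requests

# The old querystring filter (/plugins?consumer_id=x) is gone in 1.0;
# plugins scoped to a consumer are now read via the URL path instead.
ADMIN = "http://localhost:8001"

resp = requests.get(f"{ADMIN}/consumers/jane/plugins")
resp.raise_for_status()
for plugin in resp.json().get("data", []):
    print(plugin["name"])
```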
Read next:
Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless
Eclipse 4.10.0 released with major improvements to colors, fonts preference page and more
Windows Sandbox, an environment to safely test EXE files is coming to Windows 10 next year


Windows Server 2019 comes with security, storage and other changes

Prasad Ramesh
21 Dec 2018
5 min read
Today, Microsoft unveiled new features of Windows Server 2019. The new features are based on four themes: hybrid, security, application platform, and Hyper-Converged Infrastructure (HCI).

General changes
Windows Server 2019, being a Long-Term Servicing Channel (LTSC) release, includes the Desktop Experience. During setup, there are two options to choose from: Server Core installations or Server with Desktop Experience installations. A new feature called System Insights brings local predictive analytics capabilities to Windows Server 2019. It is powered by machine learning and aimed at helping users reduce the operational expenses of managing issues in Windows Server deployments.

Hybrid cloud in Windows Server 2019
The Server Core App Compatibility feature on demand (FOD) greatly improves app compatibility in the Windows Server Core installation option. It does so by including a subset of binaries and components from Windows Server with the Desktop Experience, without adding the Windows Server Desktop Experience graphical environment itself. The purpose is to increase the functionality of Windows Server while keeping a small footprint. This feature is optional and is available as a separate ISO that can be added to a Windows Server Core installation.

New measures for security
Changes have been made to add new protection capabilities, along with changes to virtual machines, networking, and the web.

Windows Defender Advanced Threat Protection (ATP)
Windows Defender now includes Advanced Threat Protection (ATP). ATP has deep platform sensors and response actions to expose memory- and kernel-level attacks, and it can respond by suppressing malicious files and terminating malicious processes. There is also a new set of host-intrusion prevention capabilities called Windows Defender ATP Exploit Guard, whose components are designed to lock down and protect a machine against a wide variety of attacks and to block behaviors common in malware attacks.

Software Defined Networking (SDN)
SDN delivers many security features which increase customer confidence in running workloads, whether on-premises or with a cloud service provider. These enhancements are integrated into the comprehensive SDN platform first introduced in Windows Server 2016.

Improvements to shielded virtual machines
Users can now run shielded virtual machines on machines that are intermittently connected to the Host Guardian Service, leveraging the fallback HGS and offline mode features. Troubleshooting of shielded virtual machines has also improved, with support for VMConnect Enhanced Session Mode and PowerShell Direct. Windows Server 2019 now supports Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server inside shielded virtual machines.

Changes for a faster and safer web
Connections are coalesced to deliver uninterrupted and encrypted browsing. For automatic connection-failure mitigation and ease of deployment, HTTP/2's server-side cipher suite negotiation has been upgraded.

Storage
Three storage changes arrive in Windows Server 2019.

Storage Migration Service
A new technology that simplifies migrating servers to a newer Windows Server version. It provides a graphical tool that inventories data on servers and transfers the data and configuration to newer servers. Users can optionally move the identities of the old servers to the new ones so that apps and users don't have to make changes.

Storage Spaces Direct
New features in Storage Spaces Direct include:
- Deduplication and compression capabilities for ReFS volumes
- Native support for persistent memory
- Nested resiliency for two-node hyper-converged infrastructure at the edge
- Two-server clusters that use a USB flash drive as a witness
- Support for Windows Admin Center
- Display of performance history
- Scale up to 4 petabytes per cluster
- Mirror-accelerated parity that is twice as fast
- Drive latency outlier detection
- Increased fault tolerance by manually delimiting the allocation of volumes

Storage Replica
Storage Replica is now also available in the Windows Server 2019 Standard edition. A new feature called test failover allows mounting of destination storage to validate replication or backup data. Performance improvements have been made, and Windows Admin Center support has been added.

Failover clustering
New features in failover clustering include:
- Cluster sets and Azure-aware clusters
- Cross-domain cluster migration
- USB witness
- Cluster infrastructure improvements
- Cluster Aware Updating support for Storage Spaces Direct
- File share witness enhancements
- Cluster hardening
- Failover Cluster no longer using NTLM authentication

Application platform changes in Windows Server 2019
Users can now run Windows- and Linux-based containers on the same container host using the same docker daemon, and changes are continually being made to improve support for Kubernetes. A number of improvements have been made to containers, covering identity, compatibility, reduced size, and higher performance. Virtual network encryption now allows virtual network traffic to be encrypted between virtual machines that communicate within subnets marked as Encryption Enabled. There are also improvements to network performance for virtual workloads, the time service, SDN gateways, a new deployment UI, and persistent memory support for Hyper-V VMs.

For more details, visit the Microsoft website.

Read next:
OpenSSH, now a part of the Windows Server 2019
Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019

Windows Sandbox, an environment to safely test EXE files is coming to Windows 10 next year

Prasad Ramesh
20 Dec 2018
2 min read
Microsoft will be offering a new tool called Windows Sandbox next year with a Windows 10 update. Revealed this Tuesday, it provides an environment to safely test EXE applications before running them on your computer.

Windows Sandbox features
Windows Sandbox is an isolated desktop environment where users can run untrusted software without any risk of it affecting their computer. Any application installed in Windows Sandbox stays in the sandbox and cannot affect the host; all software, along with its files and state, is permanently deleted when the sandbox is closed. Windows 10 Pro or Windows 10 Enterprise is required, and the feature will ship with an update, with no separate download needed. Every run of Windows Sandbox is pristine and behaves like a fresh installation of Windows. It uses hardware-based virtualization for kernel isolation based on Microsoft's hypervisor: a separate kernel isolates it from the host machine, and it has an integrated kernel scheduler and a virtual GPU.

[Diagram source: Microsoft website]

Requirements
To use this new Hyper-V-based feature, you'll need AMD64 architecture, virtualization capabilities enabled in BIOS, a minimum of 4 GB RAM (8 GB recommended), 1 GB of free disk space (SSD recommended), and a dual-core CPU (4 cores with hyperthreading recommended).

What people are saying
The general sentiment towards this release is positive.
https://twitter.com/AnonTechOps/status/1075509695778041857

However, a comment on Hacker News suggests that it might not be that useful for its intended purpose: "Ironically, even though the recommended use for this in the opening paragraph is to combat malware, I think that will be the one thing this feature is no good at. Doesn't even moderately sophisticated malware these days try to detect if it's in a sandbox environment? A fresh-out-of-the-box Windows install must be a giant red flag for that."

Meanwhile, if you're on Windows 7 or Windows 8, you can try Sandboxie. For more technical details under the hood of Sandbox, visit the Microsoft website.

Read next:
Oracle releases VirtualBox 6.0.0 with improved graphics, user interface and more
Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"
Are containers the end of virtual machines?


Oracle releases VirtualBox 6.0.0 with improved graphics, user interface and more

Amrata Joshi
19 Dec 2018
2 min read
Yesterday, the team at Oracle released VirtualBox 6.0.0, a free and open-source hosted hypervisor for x86 computers. VirtualBox was initially developed by Innotek GmbH, which was acquired by Sun Microsystems in 2008 and then by Oracle in 2010. VirtualBox is a virtualization product for enterprise as well as home use, and an extremely feature-rich, high-performance product for enterprise customers.

Features of VirtualBox 6.0.0

User interface
VirtualBox 6.0.0 comes with greatly improved HiDPI and scaling support, including better detection and per-machine configuration. The user interface is simpler and more powerful, and a new file manager enables users to control the guest file system and copy files between host and guest.

Graphics
VirtualBox 6.0.0 features 3D graphics support for Windows guests and VMSVGA 3D graphics device emulation on Linux and Solaris guests. It adds support for surround-speaker setups and a new vboximg-mount utility on Apple hosts for accessing the content of guest disks on the host. There is also added support for using Hyper-V as a fallback execution core on Windows hosts, avoiding the inability to run VMs, albeit at lower performance. VirtualBox 6.0.0 additionally supports exporting a virtual machine to Oracle Cloud Infrastructure, and it offers a better application and virtual machine set-up experience.

Linux guests
This release supports Linux 4.20 and VMSVGA, and the process of building vboxvideo on the EL 7.6 standard kernel has been improved.

Other features
- Support for DHCP options
- Initial macOS guest support
- Up to four custom ACPI tables can now be configured for a VM
- Video and audio recording can now be enabled separately
- Better support for attaching and detaching remote desktop connections

Major bug fixes
- An issue where the wrong instruction was executed after a single-step exception with rdtsc has been resolved
- Audio/video recording has been improved
- Issues with serial port emulation have been fixed
- The resizing issue with disk images has been resolved
- Shared folder auto-mounting has been improved
- Issues with the BIOS have been fixed

Read more about this release in VirtualBox's changelog.

Read next:
Installation of Oracle VM VirtualBox on Linux
Setting up a Joomla Web Server using Virtualbox, TurnkeyLinux and DynDNS
How to Install VirtualBox Guest Additions