
Tech News - High Performance

10 Articles

It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!

Melisha Dsouza
06 Mar 2019
2 min read
Yesterday, Holger Levsen, a member of the team maintaining reproducible.debian.net, started a discussion on reproducible builds, stating that “Debian Buster will only be 54% reproducible (while we could be at >90%)”.

He started off by stating that tests indicate that 26476 (92.8%) of Debian Buster’s 28523 source packages can be built reproducibly in buster/amd64. Those 28523 source packages build 57448 binary packages. Next, looking at the binary packages that Debian actually distributes, he says that Vagrant came up with the idea of checking buildinfo.debian.net for .deb files for which two or more .buildinfo files exist. Turning this into a Jenkins job, he checked this for all 57448 binary packages in amd64/buster/main (including downloading all those .deb files from ftp.d.o) and obtained the following results:

  • reproducible packages in buster/amd64: 30885 (53.76%)
  • unreproducible packages in buster/amd64: 26543 (46.20%)
  • reproducible binNMUs in buster/amd64: 0 (0%)
  • unreproducible binNMUs in buster/amd64: 7423 (12.92%)

He suggests that binNMUs are unreproducible because of their design, and his proposed solution is that 'binNMUs should be replaced by easy "no-change-except-debian/changelog-uploads"'. This would mean a 12% increase in reproducibility on top of the 54%. He also discovered that 6804 source packages need a rebuild because they were built before December 2016 with an old dpkg that did not produce .buildinfo files; 6804 of 28523 accounts for 23.9%. Summing everything up, 54% + 12% + 24% equals roughly 90% reproducibility (the sketch below makes the two baselines explicit). Refer to the entire discussion thread for more details on this news.

  • Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
  • User discovers bug in debian stable kernel upgrade; armmp package affected
  • Debian 9.7 released with fix for RCE flaw
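As a back-of-the-envelope check of the tally, here is a small Python sketch using the counts quoted above. The binNMU share is computed against the 57448 binary packages while the rebuild share is computed against the 28523 source packages, which is why the three figures only sum to roughly 90%:

```python
# Sanity check of Levsen's reproducibility tally, using the counts
# quoted in the discussion above. The percentages use two different
# baselines (binary vs. source packages), so their sum is an
# approximation, not an exact figure.
total_source = 28523           # source packages in buster
total_binary = 57448           # binary packages built from them

reproducible_bin = 30885       # reproducible binary packages today
unreproducible_binnmu = 7423   # binNMUs, unreproducible by design
needs_rebuild_src = 6804       # sources built with a pre-.buildinfo dpkg

print(f"reproducible now:     {reproducible_bin / total_binary:.2%}")       # ~53.76%
print(f"fixable binNMUs:      {unreproducible_binnmu / total_binary:.2%}")  # ~12.92%
print(f"fixable via rebuilds: {needs_rebuild_src / total_source:.2%}")      # ~23.85%
```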


AdaCore joins the RISC-V Foundation, adds support for C and Ada compilation

Prasad Ramesh
04 Feb 2019
2 min read
Last week, AdaCore announced that it is now a member of the RISC-V Foundation. The RISC-V Foundation is a non-profit organization that stewards the free and open-source instruction set architecture (ISA) called RISC-V. RISC-V was created by the Computer Science Division of the EECS Department at the University of California, Berkeley, and the foundation is supported by over 200 members. The RISC-V ISA can be implemented via either open-source or proprietary architectures, which allows chip designers to use an assembly language designed for clarity.

By becoming a part of the RISC-V Foundation, AdaCore makes its Ada and SPARK languages available to RISC-V developers, offering them an environment in which they can develop applications where safety and security are critical. The first product offerings from AdaCore, GNAT Pro Ada and GNAT Pro C, are made for bare-metal RISC-V 32- and 64-bit architectures; they can also be used in the GNAT Community edition for bare-metal RISC-V 32-bit configurations.

Rick O’Connor, executive director of the RISC-V Foundation, said: “We’re happy to see Ada joining the front row of the languages available to the RISC-V ecosystem. This will create an extremely appealing option for RISC-V users with the most stringent reliability requirements.”

Quentin Ochem, the lead of Business Development at AdaCore, said: “As we’re seeing the growth of Ada in new projects and markets, RISC-V has rapidly emerged as an indispensable ecosystem to be part of. We are fascinated by the opportunities it creates both at the technical and business levels, and we look forward to becoming an active member of the community.”

To know more about AdaCore, visit the AdaCore website.

  • Western Digital RISC-V SweRV Core is now on GitHub
  • A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer
  • LLVM officially migrating to GitHub from Apache SVN


IBM Q System One, IBM’s standalone quantum computer unveiled at CES 2019

Sugandha Lahoti
09 Jan 2019
2 min read
At the ongoing CES 2019, IBM has unveiled what is possibly the world’s first standalone quantum computer. Dubbed the IBM Q System One, it is a 20-qubit system designed to give repeatable and predictable high-quality qubits. This is IBM’s first step toward the commercialization of quantum computing, as the IBM Q System One steps out of the research lab for the first time.

https://youtu.be/LAA0-vjTaNY

IBM Q System One is composed of a number of custom components:

  • Stable and auto-calibrated quantum hardware.
  • Cryogenic engineering that maintains a cold and isolated quantum environment.
  • Quantum firmware that manages system health and upgrades without downtime for users.
  • Classical computation that provides secure cloud access and hybrid execution of quantum algorithms.

IBM is calling the Q System One the future beyond supercomputing, capable of handling applications such as modeling financial data and organizing super-efficient logistics. “This new system is critical in expanding quantum computing beyond the walls of the research lab as we work to develop practical quantum applications for business and science,” said Arvind Krishna, senior vice-president of hybrid cloud and director of IBM Research.

In the second half of 2019, IBM plans to open the IBM Q Quantum Computation Center to expand its commercial quantum computing program. The new center will be accessible to members of the IBM Q Network. You may go through IBM’s Q Experience FAQs and Beginner's guide to working with System Q for a more substantive understanding.

  • The US to invest over $1B in quantum computing, President Trump signs a law
  • UK researchers build the world’s first quantum compass to overthrow GPS
  • Italian researchers conduct an experiment to prove that quantum communication is possible on a global scale
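The announcement itself contains no code, but the kind of job users submit to IBM Q systems can be sketched with the open-source Qiskit library (an assumption here, not named in the announcement). A minimal example, run on the local simulator; running on IBM Q hardware additionally requires an IBM Q Experience account and a hardware backend:

```python
# Minimal Qiskit sketch: a two-qubit Bell-state circuit, executed on
# the local simulator rather than on an IBM Q system.
from qiskit import QuantumCircuit, execute, Aer

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                # expect roughly {'00': ~512, '11': ~512}
```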

Italian researchers conduct an experiment to prove that quantum communication is possible on a global scale

Prasad Ramesh
26 Dec 2018
3 min read
Researchers from Italy have published a research paper showing that quantum communication is feasible between high-orbiting satellites and a station on the ground. This new research proves that quantum communication is possible on a global scale by using a Global Navigation Satellite System (GNSS). The results of the study are presented in a paper published last week, titled Towards quantum communication from global navigation satellite system.

In the experiment, a single photon was exchanged over a distance of 20,000 km between a ground station and a high-orbit satellite. The exchange took place between the retroreflector array mounted on Russian GLONASS satellites and the Italian space agency’s Space Geodesy Centre on Earth. The challenge with high-orbit satellites is that the distance causes high diffraction losses in the channel.

One of the co-authors, Dr. Giuseppe Vallone of the University of Padova, told IOP Publishing: “Satellite-based technologies enable a wide range of civil, scientific and military applications like communications, navigation and timing, remote sensing, meteorology, reconnaissance, search and rescue, space exploration and astronomy.” He mentions that the crux of such systems is to safely transmit information from satellites in the air to the ground, and that it is important that these channels be protected from interference by third parties. “Space quantum communications (QC) represents a promising way to guarantee unconditional security for satellite-to-ground and inter-satellite optical links, by using quantum information protocols as quantum key distribution (QKD).”

The quantum key distribution (QKD) protocols used in the experiment guarantee strong security for communication between satellites, and between satellites and Earth. In QKD, data is encrypted using quantum mechanics, and interference is detected quickly.

Another co-author, Prof. Villoresi, told IOP Publishing why they focus on high-orbit satellites despite the challenges: “The high orbital speed of low earth orbit (LEO) satellites is very effective for the global coverage but limits their visibility periods from a single ground station. On the contrary, using satellites at higher orbits can extend the communication time, reaching few hours in the case of GNSS.”

After the experiments, the researchers estimated the requirements for an active source on a GNSS satellite; they aim toward QC from GNSS with state-of-the-art technology. This does not mean faster internet or communication: only a single photon was transmitted in the experiment, so quickly transferring large amounts of data is not what this application offers. However, it does show that data transmission can be done over a large distance through a secure channel. For more details, you can check out the research paper on the IOPscience website.

  • The US to invest over $1B in quantum computing, President Trump signs a law
  • UK researchers build the world’s first quantum compass to overthrow GPS
  • Quantum computing – Trick or treat?
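The canonical QKD scheme is the BB84 protocol: the sender encodes bits in randomly chosen bases, the receiver measures in his own random bases, and only the bits where the bases match are kept; an eavesdropper unavoidably raises the error rate on those kept bits. A toy classical simulation of that sifting step, purely as an illustration (the paper does not describe this particular protocol run):

```python
# Toy BB84 sifting simulation. Alice encodes random bits in random
# bases; Bob measures in his own random bases. A measurement in the
# wrong basis yields a random outcome, so those bits are discarded
# after the bases are compared publicly.
import random

n = 32
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice('+x') for _ in range(n)]  # '+' rectilinear, 'x' diagonal
bob_bases   = [random.choice('+x') for _ in range(n)]

bob_results = [
    bit if a == b else random.randint(0, 1)   # wrong basis -> random result
    for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
]

sifted_key = [bit for bit, a, b in zip(bob_results, alice_bases, bob_bases) if a == b]
print(f"kept {len(sifted_key)} of {n} bits:", sifted_key)
```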


The US to invest over $1B in quantum computing, President Trump signs a law

Prasad Ramesh
24 Dec 2018
3 min read
US President Donald Trump has signed a bill called the National Quantum Initiative Act, a nation-wide quantum computing plan that will establish goals for the next decade to accelerate the development of quantum technology.

What is the National Quantum Initiative Act about?

The bill for quantum technologies was originally introduced in June this year. It is a commitment that various departments, such as NIST, the NSF, and the Secretary of Energy, will together provide $1.25B in funding from 2019 to 2023 to promote activities in quantum information science. The new act and the funding that comes with it will boost quantum research in the US. As stated in the Act: “The bill defines ‘quantum information science’ as the storage, transmission, manipulation, or measurement of information that is encoded in systems that can only be described by the laws of quantum physics.” The president signed the bill into law last week on Friday.

What will the National Quantum Initiative Act allow?

This bill aims to further the USA’s position in the area of quantum information science and its technology applications, and will support research and development of quantum technologies that can lead to practical applications. It seeks to:

  • Expand the quantum computing workforce
  • Promote research opportunities across various academic levels
  • Address any knowledge gaps
  • Add more facilities and centers for testing and education in this field
  • Promote rapid development of quantum-based technologies

The bill also seeks to:

  • Improve the collaboration between the US Federal Government, its laboratories, industry, and universities
  • Promote the development of international standards for quantum information science
  • Facilitate technology innovation and private-sector commercialization
  • Meet the economic and security goals of the USA

The US President will work with Federal agencies, working groups, councils, subcommittees, etc., to set goals for the National Quantum Initiative Act.

What’s the fuss with quantum computing?

As we mentioned in a previous post: “Quantum computing uses quantum mechanics in quantum computers to solve a diverse set of complex problems. It uses qubits to store information in parallel dimensions. Quantum computers can work through a solution involving large parameters with far fewer operations than a standard computer.” This does not mean that a quantum computer is necessarily faster than a classical computer; rather, a quantum computer is better at solving complex problems that a regular one would take far too long to solve, if it could solve them at all. Quantum computers have great potential to solve future problems and are hence drawing attention from tech companies and governments, with D-Wave launching a quantum cloud service, UK researchers working on quantum entanglements, and Rigetti working on a 128-qubit chip.

What are the people saying?

As is the general observation around the motivation for quantum computing, this comment from Reddit puts it nicely: “Make no mistake, this is not only about advancing computing power, but this is also about maintaining cryptographic dominance. Quantum computers will be able to break a lot of today's encryption.” Another comment expresses: “Makes sense, Trump has a tendency to be in 2 different states simultaneously.”

You can read the bill in its entirety on the Congress Government website.

  • Quantum computing – Trick or treat?
  • Rigetti Computing launches the first Quantum Cloud Services to bring quantum computing to businesses
  • Did quantum computing just take a quantum leap? A two-qubit chip by UK researchers makes controlled quantum entanglements possible
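The quoted idea of qubits “storing information in parallel dimensions” comes down to state-vector size: a classical description of n qubits needs 2^n complex amplitudes, which is why classically simulating even modest quantum machines quickly becomes infeasible. A small sketch of that growth:

```python
# Why quantum systems are hard to simulate classically: an n-qubit
# state vector holds 2**n complex amplitudes, so classical memory
# needs grow exponentially with the number of qubits.
import numpy as np

for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * np.dtype(np.complex128).itemsize / 2**30
    print(f"{n:2d} qubits -> {amplitudes:>16,d} amplitudes (~{gib:,.1f} GiB as complex128)")
```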


Cirq 0.4.0 released for writing quantum circuits

Prasad Ramesh
30 Nov 2018
3 min read
Cirq, created by Google, is a Python library for writing quantum circuits and running them against quantum computers. Cirq 0.4.0 is now released and is available on GitHub.

Themes of the changes in Cirq 0.4.0

The API is now more pythonic and more consistent, at the cost of breaking changes and refactoring, and simulation is faster.

New functionality in Cirq 0.4.0

The following functions and parameters have been added:

  • cirq.Rx, cirq.Ry, and cirq.Rz
  • cirq.XX, cirq.YY, cirq.ZZ, and cirq.MS (related to the Mølmer–Sørensen gate)
  • cirq.Simulator
  • The cirq.SupportsApplyUnitary protocol, for specifying fast simulation methods
  • cirq.Circuit.reachable_frontier_from and cirq.Circuit.findall_operations_between
  • cirq.decompose
  • sorted(qubits) and cirq.QubitOrder.DEFAULT.order_for(qubits) are now equivalent
  • cirq.experiments.generate_supremacy_circuit_[...]
  • dtype parameters to control the precision-versus-speed trade-off of simulations
  • cirq.TrialResult helper methods (dirac_notation / bloch_vector / density_matrix)
  • cirq.TOFFOLI and cirq.CCZ can be raised to powers

Breaking changes in Cirq 0.4.0

  • Most of the gate classes have been standardized: they now take an exponent argument and have names of the form NamePowGate. For example, RotXGate is now XPowGate, and it no longer takes rads, degs, or half_turns.
  • The xmon gate set has been merged into the common gate set.
  • The capability marker classes have been replaced by magic-method protocols. For example, gates now simply implement a _unitary_ method instead of inheriting from KnownMatrix. cirq.Extensions and cirq.PotentialImplementation are removed.
  • Many decomposition classes and methods have moved from cirq.google.* to cirq.*; for example, cirq.google.EjectFullW is now cirq.EjectPhasedPaulis. The classes and methods related to line placement have moved into cirq.google.

Notable bug fixes

  • Two-qubit gate decomposition no longer produces a glut of single-qubit gates.
  • Circuit diagrams stay aligned when given multi-line entries, and they now include "same moment" indicators.
  • False positives and false negatives have been fixed in cirq.testing.assert_circuits_with_terminal_measurements_are_equivalent.
  • Many repr methods returning code that assumed from cirq import * instead of import cirq have been fixed.
  • Example code now runs in both Python 2 and Python 3 without the need for transpilation.

Notable dev changes

  • Test files now import cirq instead of just specific modules.
  • There is better testing and packaging of scripts.
  • The package versions for Python 2 and Python 3 are no longer different.
  • The cirq.value_equality decorator has been added.
  • New cirq.testing methods and classes have been added.

Additions to contrib

  • cirq.contrib.acquaintance: new utilities for defining permutation gates
  • cirq.contrib.paulistring: utilities for optimizing non-Clifford operations that are separated by Clifford operations
  • cirq.contrib.tpu: utilities for converting circuits into an executable form for use on cloud TPUs (requires TensorFlow)

  • Google AdaNet, a TensorFlow-based AutoML framework
  • Graph Nets – DeepMind’s library for graph networks in Tensorflow and Sonnet
  • A new Model optimization Toolkit for TensorFlow can make models 3x faster
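As a short sketch of the renamed API (assuming a cirq 0.4.x install; cirq.Circuit.from_ops was the circuit constructor of that era and was removed in later releases, where cirq.Circuit(...) is used instead):

```python
# Sketch against the Cirq 0.4.0-era API: NamePowGate-style gates and
# the new cirq.Simulator.
import cirq

q0, q1 = cirq.LineQubit.range(2)

circuit = cirq.Circuit.from_ops(
    (cirq.X ** 0.5)(q0),            # X**0.5 is an XPowGate, replacing RotXGate
    cirq.CNOT(q0, q1),
    cirq.measure(q0, q1, key='m'),
)
print(circuit)

result = cirq.Simulator().run(circuit, repetitions=100)   # cirq.Simulator is new in 0.4.0
print(result.histogram(key='m'))
```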

UK researchers build the world’s first quantum compass to overthrow GPS

Sugandha Lahoti
12 Nov 2018
2 min read
British researchers have successfully built the world’s first standalone quantum compass, which could act as a replacement for GPS since it allows highly accurate navigation without the need for satellites. The quantum compass was built by researchers from Imperial College London and the Glasgow-based laser firm M Squared. The project received funding from the UK Ministry of Defence (MoD) under the UK National Quantum Technologies Programme.

The device is completely self-contained and transportable. Starting from an object’s known position, it measures how the object’s velocity changes over time, thereby overcoming issues of traditional GPS systems such as blockage by tall buildings or signal jamming. High precision and accuracy are achieved by measuring properties of super-cool atoms, which means any loss in accuracy is "immeasurably small".

Dr. Joseph Cotter, from the Centre for Cold Matter at Imperial, said: “When the atoms are ultra-cold we have to use quantum mechanics to describe how they move, and this allows us to make what we call an atom interferometer. As the atoms fall, their wave properties are affected by the acceleration of the vehicle. Using an ‘optical ruler’, the accelerometer is able to measure these minute changes very accurately.”

The first real-world application for the device could be in the shipping industry: its current size is suitable for large ships or aircraft. However, the researchers are already working on a miniature version that could eventually fit in a smartphone. The team is also working on using the principle behind the quantum compass for research into dark energy and gravitational waves.

Dr. Graeme Malcolm, founder and CEO of M Squared, said: “This commercially viable quantum device, the accelerometer, will put the UK at the heart of the coming quantum age. The collaborative efforts to realize the potential of quantum navigation illustrate Britain’s unique strength in bringing together industry and academia – building on advancements at the frontier of science, out of the laboratory to create real-world applications for the betterment of society.”

Read the press release on the Imperial College blog.

  • Quantum computing – Trick or treat?
  • D-Wave launches Leap, a free and real-time Quantum Cloud Service
  • Did quantum computing just take a quantum leap? A two-qubit chip by UK researchers makes controlled quantum entanglements possible


Salesforce open sources Centrifuge: a library for accelerating JVM restarts

Amrata Joshi
02 Nov 2018
3 min read
Yesterday, Paymon Teyer, a principal member of technical staff at Salesforce, introduced Centrifuge, a library and framework for scheduling and running startup and warmup tasks. It focuses mainly on accelerating JVM restarts and provides an interface for implementing warmup tasks, such as calling an HTTP endpoint, populating caches, and handling pre-compilation tasks for generated code.

When the JVM restarts in a production environment, server performance suffers: the JVM has to reload classes, trigger reflection inflation, rerun its JIT compiler on hot code paths, reinitialize objects and dependency injections, and repopulate component caches. This impact can be minimized by allowing individual components to execute arbitrary warmup logic themselves after a cold start. Centrifuge was created to make this possible: it executes warmup tasks while managing resource usage and handling failures.

Centrifuge allows users to register and configure warmup tasks either descriptively or programmatically. It also schedules tasks, manages and monitors threads, handles exceptions and retries, and provides status reports. Centrifuge supports two categories of warmup tasks (a conceptual sketch of the split follows below):

  • Blocking tasks prevent the application from returning to the available server pool until they complete. These tasks must be executed for the application to function properly, for example, running source code generators or populating a cache from storage to meet SLA requirements.
  • Non-blocking tasks execute asynchronously and do not interfere with the application’s readiness. These do work that is needed after an application restarts but is not required immediately for the application to be in a consistent state, for example, warmup logic that triggers JIT compilation on code paths or eagerly triggers dependency injection and object creation.

How to use Centrifuge?

The first step is to include a Maven dependency for Centrifuge in the POM, then implement the Warmer interface for each warmup task. The warmer class should have an accessible default constructor and should not swallow InterruptedException. Warmers can be registered either programmatically in code or descriptively in a configuration file; to add and remove warmers without recompiling, register them descriptively in a configuration file, which is then loaded into Centrifuge.

How is the HTTP warmer useful?

Centrifuge provides a simple HTTP warmer that calls HTTP endpoints to trigger the code paths exercised by the resources implementing those endpoints. If an application provides a homepage URL that, when called, connects to a database, populates caches, and so on, the HTTP warmer can warm those code paths.

Read more about Centrifuge on Salesforce’s official website.

  • About Java Virtual Machine – JVM Languages
  • Tuning Solr JVM and Container
  • Concurrency programming 101: Why do programmers hang by a thread?
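Centrifuge itself is a Java library, and its real API is the Warmer interface described above. Purely as a language-neutral sketch of the blocking versus non-blocking split (all names below are hypothetical, not Centrifuge’s):

```python
# Conceptual sketch, not Centrifuge's API: blocking warmers must finish
# before the app reports ready; non-blocking warmers run in the
# background and must not affect readiness.
import threading

def populate_cache():          # hypothetical blocking warmup task
    print("populating cache before serving traffic")

def trigger_jit_paths():       # hypothetical non-blocking warmup task
    print("exercising hot code paths in the background")

def run_warmup(blocking, non_blocking):
    for task in blocking:      # app is not "ready" until these complete
        task()
    for task in non_blocking:  # fire-and-forget; failures are logged, not fatal
        threading.Thread(target=task, daemon=True).start()
    print("application ready")

run_warmup(blocking=[populate_cache], non_blocking=[trigger_jit_paths])
```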


Intel Optane DC Persistent Memory available first on Google Cloud

Melisha Dsouza
01 Nov 2018
2 min read
On 30th October, Google announced in a blog post the alpha availability of virtual machines with 7TB of total memory utilizing Intel Optane DC Persistent Memory. The partnership between Google, SAP, and Intel, announced in July, empowers users to handle and store large amounts of data and to run in-memory databases such as SAP HANA. Now, with the availability of Intel Optane DC Persistent Memory on Google Cloud, GCP customers will have the ability to scale up their workloads while benefiting from all the infrastructure capabilities and flexibility of Google Cloud.

Features of Intel Optane DC Persistent Memory

Intel Optane DC Persistent Memory has two special operating modes: App Direct mode and Memory mode. App Direct mode allows applications to receive the full value of the product’s native persistence and larger capacity. In Memory mode, applications running in a supported operating system or virtual environment can use the persistent memory as volatile memory, gaining an increase in system capacity from module sizes of up to 512 GB without rewriting software.

Unlike traditional DRAM, Intel Optane DC Persistent Memory offers high capacity, affordability, and persistence. Systems deploying this technology can improve analytics, database and in-memory database workloads, artificial intelligence, high-capacity virtual machines and containers, and content delivery networks. The technology reduces in-memory database restart times from days or hours to minutes or seconds and expands system memory capacity.

Google also stated that early customers have seen almost a 12x improvement in SAP HANA startup times using Intel Optane DC Persistent Memory. Alibaba, Cisco, Dell EMC, Fujitsu, and Hewlett Packard Enterprise are some of the many companies to have announced beta services and systems for early customer trials and deployments of this technology. The search engine giant has hinted at a larger Optane-based VM offering to follow in 2019.

To know more about this news, head over to Google Cloud’s official blog post.

  • What’s new in Google Cloud Functions serverless platform
  • Google’s Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
  • Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
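In App Direct mode the modules are typically exposed through a DAX-capable filesystem, and applications get direct load/store access by memory-mapping files on it. A minimal sketch, assuming a hypothetical file on a DAX mount at /mnt/pmem0 (a real deployment would use Intel’s PMDK/libpmem for proper cache-flushing and persistence guarantees, which plain mmap does not provide):

```python
# Illustrative App Direct-style access via mmap; the path is
# hypothetical. Durability on real persistent memory needs explicit
# flushing via PMDK's libpmem; this sketch approximates it with msync.
import mmap
import os

PATH = "/mnt/pmem0/scratch.bin"   # hypothetical file on a DAX filesystem
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o644)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as pmem:
    pmem[0:5] = b"hello"          # plain load/store access, no read()/write() syscalls
    pmem.flush()                  # msync(); on DAX this pushes the data toward media
os.close(fd)
```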

Is Atlassian’s decision to forbid benchmarking potentially masking its degrading performance?

Savia Lobo
02 Oct 2018
3 min read
Last week, the software company Atlassian released its updated ‘Atlassian Software License Agreement’ and ‘Cloud Terms of Service’, effective from the 1st of November, 2018. Like any general agreement, it covers the scope, the authorized users, the use of the software, and so on. However, it also sets up certain restrictions concerning the performance of Atlassian’s software: as per the new agreement, benchmarking of Atlassian software is forbidden.

Restrictions on benchmarking

In a discussion on the Atlassian Developer Community, Andy Broker, a marketplace vendor, highlighted two clauses from the restrictions section:

(i) publicly disseminate information regarding the performance of the Software

Andy Broker explains this clause as follows: “This sounds very much like the nonsense clause that Intel were derided for, regarding the performance of their CPU’s. Intel backtracked after being lambasted by the world, I can’t really understand how these points got into new Atlassian terms, surely the terms have had a technical review? Just… why, given all the DC testing being done ongoing, this is an area where data we gathered may be interesting to prospective customers.”

(j) encourage or assist any third party to do any of the foregoing.

Andy Broker further adds: “So, we can’t guide/help a customer understand how to even measure performance to determine if they have a performance issue in the “Software”, e.g. generating a performance baseline before an ‘app’ is involved? The result of this would appear sub-optimal for Customer and Vendors alike, the “Software” performance just becomes 3rd party App performance that we cannot ‘explain’ or ‘show’ to customers.”

Why did Atlassian decide to forbid benchmarking?

In a discussion thread on Hacker News, many users stated their views on why Atlassian chose to forbid benchmarking of its software. According to one comment, “If the company bans benchmarking, then the product is slow.” A user also stated that Atlassian software is slow, has annoying UX, and is very inconsistent. This may be because most of its software is built on JIRA, which has a Java backend and is not Node based; Jira cannot be rebuilt from scratch, only slowly abstracted and broken up into smaller pieces. Also, around three years ago Atlassian forked its behind-the-firewall codebase into two distinct products, multi-tenant cloud and traditional, to get into the cloud sector with a view to attracting more potential customers and thus increasing growth.

A user also stated: “Cloud was full buy into AWS, taking a behind the firewall product and making it multi-tenanted cloud-based is a huge job. A monolith service now becomes highly distributed so latency's obviously mounted up due to the many services to service interactions.” The user further added: “some things which are heavily multi-threaded and built in statically compiled languages had to be built in single threaded Node.js because everyone is using Node.js and your language is now banned. It's not surprising there are noticeable performance differences.”

Another user suggested that “the better way for a company to handle this concern is to proactively run and release benchmarks including commentary on the results, together with everything necessary for anyone to reproduce their results.” The user added that Atlassian could even fund a trustworthy, neutral third party to perform benchmarking, with proper disclosure of the funding.

To read the entire discussion in detail, head over to Hacker News.

  • Atlassian acquires OpsGenie, launches Jira Ops to make incident response more powerful
  • Atlassian sells Hipchat IP to Slack
  • Atlassian open sources Escalator, a Kubernetes autoscaler project