
Tech News - Servers

57 Articles

Slack was down for an hour yesterday, causing disruption during work hours

Fatema Patrawala
30 Jul 2019
2 min read
Yesterday, Slack reported an outage that started at 7:23 a.m. PDT and was fully resolved at 8:48 a.m. PDT. The Slack status page said that some people had issues sending messages while others couldn't access their channels at all. Slack said it was fully up and running again about an hour after the issues emerged.

https://twitter.com/SlackStatus/status/1155869112406437889

According to Business Insider, more than 2,000 users reported issues with Slack via Downdetector. Employees around the globe rely on Slack to communicate, organize tasks and share information. Downdetector's live outage map showed a concentration of reports in the United States and a few in Europe and Japan. Slack has not yet shared the cause of the disruption on its status page. Last month, Slack suffered another outage, caused by server unavailability.

Users took to Twitter, sending funny memes and GIFs about how much they depend on Slack to communicate.

https://twitter.com/slabodnick/status/1155858811518930946
https://twitter.com/gbhorwood/status/1155864432527867905
https://twitter.com/envyvenus/status/1155857852625555456
https://twitter.com/nhetmalaluan/status/1155863456991436800

On Hacker News, meanwhile, users were annoyed and said that such issues have become quite common. One user commented, "This is becoming so often it's embarrassing really. The way it's handled in the app is also not ideal to say the least - only indication that something is wrong is that the text you are trying to send is greyed out."

Why did Slack suffer an outage on Friday?
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services


Twitter experienced major outage yesterday due to an internal configuration issue

Fatema Patrawala
12 Jul 2019
4 min read
Yesterday, Twitter went down across major parts of the world, including the US and the UK. Users reported being unable to access the platform on the web and on mobile devices. The outage lasted approximately an hour.

According to DownDetector.com, the site began experiencing major issues at 2:46pm EST, with problems reported by users attempting to access Twitter through its website, its iPhone and iPad app, and Android devices. The majority of reported problems were website issues (51%), while nearly 30% came from iPhone and iPad app usage and another 18% from Android users, as per the outage report.

Twitter acknowledged the issues on its status page shortly after the first outages were reported online. The company listed the status as "investigating" and noted that a service disruption was causing the seemingly global issue. "We are currently investigating issues people are having accessing Twitter," the statement read. "We will keep you updated on what's happening."

This month has seen several high-profile outages among social networks. Facebook and Instagram experienced a day-long outage affecting large parts of the world on July 3rd. LinkedIn went down for several hours on Wednesday. Cloudflare suffered two major outages in the span of two weeks this month: one was due to an internal software glitch, and another was caused when Verizon wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA, and accidentally rerouted IP packets. Reddit was experiencing outages on its website and app earlier in the day, but appeared to be back up and running for most users an hour before Twitter went down, according to DownDetector.com. In March, Facebook and its family of apps experienced a 14-hour outage attributed to a server configuration change.

Twitter began operating normally nearly an hour later, at approximately 3:45pm EST. Users joked that they were "all censored for the last hour" when the site eventually came back up. On its status page, Twitter said that the outage was caused by "an internal configuration change, which we're now fixing." "Some people may be able to access Twitter again and we're working to make sure Twitter is available to everyone as quickly as possible," the company said in a follow-up statement.

https://twitter.com/TwitterSupport/status/1149412158121267200

On Hacker News, users discussed the number of recent outages at major tech companies and why they keep happening. One of the comments reads, "Ok, this is too many high-profile, apparently unrelated outages in the last month to be completely a coincidence. Hypotheses: 1) software complexity is escalating over time, and logically will continue to until something makes it stop. It has now reached the point where even large companies cannot maintain high reliability. 2) internet volume is continually increasing over time, and periodically we hit a point where there are just too many pieces required to make it work (until some change the infrastructure solves that). We had such a point when dialup was no longer enough, and we solved that with fiber. Now we have a chokepoint somewhere else in the system, and it will require a different infrastructure change 3) Russia or China or Iran or somebody is f*(#ing with us, to see what they are able to break if they needed to, if they need to apply leverage to, for example, get sanctions lifted 4) Just a series of unconnected errors at big companies 5) Other possibilities?"

To this, another user responded, "I work at Facebook. I worked at Twitter. I worked at CloudFlare. The answer is nothing other than #4. #1 has the right premise but the wrong conclusion. Software complexity will continue escalating until it drops by either commoditization or redefining problems. Companies at the scale of FAANG(+T) continually accumulate tech debt in pockets and they eventually become the biggest threats to availability. Not the new shiny things. The sinusoidal pattern of exposure will continue."

Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files
Facebook family of apps hits 14 hours outage, longest in its history
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others


Google suffers another outage as Google Cloud servers in the us-east1 region are cut off

Amrata Joshi
03 Jul 2019
3 min read
Yesterday, Google Cloud servers in the us-east1 region were cut off from the rest of the world, as an issue was reported with Cloud Networking and Load Balancing within us-east1. The disruption was caused by physical damage to multiple concurrent fiber bundles that serve network paths in us-east1.

At 10:25 am PT yesterday, the status was updated: "Customers may still observe traffic through Global Load-balancers being directed away from back-ends in us-east1 at this time." It was later posted on the status dashboard that mitigation work was underway to address the issue. The rate of errors was decreasing by then, but a few users still faced elevated latency.

Around 4:05 pm PT, the status was updated again: "The disruptions with Google Cloud Networking and Load Balancing have been root caused to physical damage to multiple concurrent fiber bundles serving network paths in us-east1, and we expect a full resolution within the next 24 hours. In the meantime, we are electively rerouting traffic to ensure that customers' services will continue to operate reliably until the affected fiber paths are repaired. Some customers may observe elevated latency during this period. We will provide another status update either as the situation warrants or by Wednesday, 2019-07-03 12:00 US/Pacific tomorrow."

This outage appears to be the second major one to hit Google's services in recent times. Last month, Google Calendar was down for nearly three hours around the world, and Google Cloud suffered a major outage that took down a number of Google services including YouTube, G Suite and Gmail.

According to a person who works on Google Cloud, the team is experiencing an issue with a subset of the fiber paths that supply the region and is working towards resolving it. They have removed almost all Google.com traffic out of the region to prefer GCP customers. A Google employee commented on the Hacker News thread, "I work on Google Cloud (but I'm not in SRE, oncall, etc.). As the updates to [1] say, we're working to resolve a networking issue. The Region isn't (and wasn't) "down", but obviously network latency spiking up for external connectivity is bad. We are currently experiencing an issue with a subset of the fiber paths that supply the region. We're working on getting that restored. In the meantime, we've removed almost all Google.com traffic out of the Region to prefer GCP customers. That's why the latency increase is subsiding, as we're freeing up the fiber paths by shedding our traffic."

Google Cloud users are anxious about this outage and are awaiting the restoration of services.

https://twitter.com/IanFortier/status/1146079092229529600
https://twitter.com/beckynagel/status/1146133614100221952
https://twitter.com/SeaWolff/status/1146116320926359552

Ritiko, a cloud-based EHR company, is also experiencing issues because of the Google Cloud outage, as it hosts its services there.

https://twitter.com/ritikoL/status/1146121314387857408

As of now there is no further update from Google on whether the outage is resolved, but the company expects a full resolution within the next 24 hours. Check this space for updates.

Google Calendar was down for nearly three hours after a major outage
Do Google Ads secretly track Stack Overflow users?
Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard
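Directing traffic through global load balancers away from back-ends in a damaged region, as Google's status update describes, amounts to removing those back-ends from the eligible pool and routing requests only to healthy regions. A minimal sketch of the idea in Python (the data model, addresses, and function names are illustrative, not GCP's actual implementation):

```python
import random

# Hypothetical backend pool; the region names follow GCP's convention.
BACKENDS = [
    {"region": "us-east1", "address": "10.0.1.10"},
    {"region": "us-central1", "address": "10.0.2.10"},
    {"region": "europe-west1", "address": "10.0.3.10"},
]

def eligible_backends(backends, drained_regions):
    """Filter out back-ends located in regions that are being drained."""
    return [b for b in backends if b["region"] not in drained_regions]

def pick_backend(backends, drained_regions):
    """Route a request to a randomly chosen healthy back-end."""
    pool = eligible_backends(backends, drained_regions)
    if not pool:
        raise RuntimeError("no healthy back-ends available")
    return random.choice(pool)

# With us-east1 drained, traffic only ever reaches the other regions.
choice = pick_backend(BACKENDS, drained_regions={"us-east1"})
```

Real load balancers layer health checks, capacity weights, and latency-based routing on top of this, which is why customers outside the drained region saw elevated latency rather than errors.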


Why did Slack suffer an outage on Friday?

Fatema Patrawala
01 Jul 2019
4 min read
On Friday, Slack, an instant messaging platform for workspaces, confirmed a global outage. Millions of users reported disruption in services due to the outage, which occurred early Friday afternoon. Slack experienced a performance degradation issue impacting users all over the world, with multiple services down. Yesterday, the Slack team posted a detailed incident summary report of the service restoration. The Slack status page read:

"On June 28, 2019 at 4:30 a.m. PDT some of our servers became unavailable, causing degraded performance in our job processing system. This resulted in delays or errors with features such notifications, unfurls, and message posting. At 1:05 p.m. PDT, a separate issue increased server load and dropped a large number of user connections. Reconnection attempts further increased the server load, slowing down customer reconnection. Server capacity was freed up eventually, enabling all customers to reconnect by 1:36 p.m. PDT. Full service restoration was completed by 7:20 p.m. PDT. During this period, customers faced delays or failure with a number of features including file uploads, notifications, search indexing, link unfurls, and reminders. Now that service has been restored, the response team is continuing their investigation and working to calculate service interruption time as soon as possible. We're also working on preventive measures to ensure that this doesn't happen again in the future. If you're still running into any issues, please reach out to us at feedback@slack.com."

https://twitter.com/SlackStatus/status/1145541218044121089

These were the services affected by the outage: Notifications, Calls, Connections, Search, Messaging, Apps/Integrations/APIs, Link Previews, Workspace/Org Administration, and Posts/Files.

Timeline of Friday's Slack outage

According to user reports, some Slack messages were not delivered, with users receiving an error message. On Friday, at 2:54 PM GMT+3, the Slack status page gave the initial signs of the issue: "Some people may be having an issue with Slack. We're currently investigating and will have more information shortly. Thank you for your patience."

https://twitter.com/SlackStatus/status/1144577107759996928

According to Down Detector, Slack users noted that message editing also appeared to be impacted by the latest bug. Comments indicated it was down around the world, including in Sweden, Russia, Argentina, Italy, the Czech Republic, Ukraine and Croatia. The Slack team continued to give updates on the issue, and on Friday evening they reported that services were getting back to normal.

https://twitter.com/SlackStatus/status/1144806594435117056

The news gained much attention on Twitter, with many commenting that Slack was already prepping for the weekend.

https://twitter.com/RobertCastley/status/1144575285980999682
https://twitter.com/Octane/status/1144575950815932422
https://twitter.com/woutlaban/status/1144577117788790785

Users on Hacker News compared Slack with other messaging platforms like Mattermost, Zulip and Rocket.Chat. One of the comments read, "Just yesterday I was musing that if I were King of the (World|Company) I'd want an open-source Slack-alike that I could just drop into the Cloud of my choice and operate entirely within my private network, subject to my own access control just like other internal services, and with full access to all message histories in whatever database-like thing it uses in its Cloud. Sure, I'd still have a SPOF but it's game over anyway if my Cloud goes dark. Is there such a project, and if so does it have any traction in the real world?" To this, another user responded, "We use this at my company - perfectly reasonable UI, don't know about the APIs/integrations, which I assume are way behind Slack…" Another user added, "Zulip, Rocket.Chat, and Mattermost are probably the best options."

Slack stock surges 49% on the first trading day on the NYSE after direct public offering
Dropbox gets a major overhaul with updated desktop app, new Slack and Zoom integration
Slack launches Enterprise Key Management (EKM) to provide complete control over encryption keys
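Slack's incident summary notes that reconnection attempts themselves further increased server load, slowing recovery — a classic thundering-herd pattern. Clients typically mitigate this with jittered exponential backoff, spreading retries out in time instead of synchronizing them. A minimal sketch in Python (illustrative only, not Slack's actual client code):

```python
import random

def backoff_delays(base=1.0, cap=60.0, attempts=6):
    """Yield jittered exponential backoff delays, in seconds.

    This is "full jitter": each delay is drawn uniformly from
    [0, min(cap, base * 2**n)], so a fleet of disconnected clients
    spreads its reconnection attempts out rather than hammering the
    server in synchronized waves.
    """
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        yield random.uniform(0.0, ceiling)

# The retry ceilings grow 1, 2, 4, 8, 16, 32 seconds (capped at 60),
# and the random draw below each ceiling desynchronizes the clients.
delays = list(backoff_delays())
```

In a real client each delay would be a sleep before the next reconnect attempt; the key property is that no two clients are likely to retry at the same instant.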


Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can run both in the cloud and on-premises. They support machine learning frameworks like PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS Deep Learning Containers support the TensorFlow and Apache MXNet frameworks; Google's ML containers don't support Apache MXNet, but they come with PyTorch, TensorFlow, scikit-learn and R pre-installed.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that come with various tools used for running deep learning algorithms. These include preconfigured Jupyter Notebooks — interactive tools used to work with and share code, visualizations, equations and text — and Google Kubernetes Engine clusters, used for orchestrating multiple container deployments. The containers also come with access to packages and tools such as Nvidia's CUDA, cuDNN, and NCCL.

Docker images work in the cloud and on-premises

The Docker images work in the cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm.

Mike Cheng, software engineer at Google Cloud, said in a blog post, "If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime." He further added, "Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE)."

For more information, visit the AI Platform Deep Learning Containers documentation.

Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
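Since the whole point of these images is that the frameworks arrive pre-installed, a quick sanity check from a Python session inside a container is to probe which packages are importable. A minimal sketch (the framework list mirrors the article; the code runs in any Python environment, not just these containers, and merely reports what is present):

```python
import importlib.util

# Frameworks the article says ship pre-installed in Google's images
# (package names: TensorFlow -> tensorflow, PyTorch -> torch,
# scikit-learn -> sklearn).
FRAMEWORKS = ["tensorflow", "torch", "sklearn"]

def available_frameworks(names):
    """Map each package name to whether it can be imported here,
    without actually importing (and initializing) the framework."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

report = available_frameworks(FRAMEWORKS)
# Inside a Deep Learning Container all three should report True;
# elsewhere, missing frameworks simply show up as False.
```

Using `find_spec` rather than a bare `import` avoids paying the (sometimes multi-second) initialization cost of frameworks like TensorFlow just to check for their presence.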


Microsoft finally makes Hyper-V Server 2019 available, after a delay of more than six months

Vincy Davis
18 Jun 2019
3 min read
Last week, Microsoft announced that Hyper-V Server, one of the variants in the Windows 10 October 2018/1809 release, is finally available on the Microsoft Evaluation Center. This release comes after a delay of more than six months since the re-release of Windows Server 1809/Server 2019 in early November. It has also been announced that Hyper-V Server 2019 will be available to Visual Studio Subscription customers by 19th June 2019.

Microsoft Hyper-V Server is a free product that includes all the Hyper-V virtualization features of the Datacenter Edition. It is ideal for running Linux virtual machines or VDI VMs.

Microsoft had originally released Windows Server 2019 in October 2018. However, it had to pull both the client and server versions of 1809 down to investigate reports of users missing files after updating to the latest Windows 10 feature update. Microsoft then re-released Windows Server 1809/Server 2019 in early November 2018, but without Hyper-V Server 2019.

Read More: Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019

Early this year, Microsoft made Windows Server 2019 evaluation media available on the Evaluation Center, but Hyper-V Server 2019 was still missing. Though no official statement was provided, it is suspected that this may have been due to errors with Remote Desktop Services (RDS). Later, in April, Microsoft officials stated that they had found some issues with the media and would release an update soon.

Now that Hyper-V Server 2019 is finally available, Windows Server 2019 users can be at ease. Users who managed to download the original release of Hyper-V Server 2019 while it was available are advised to delete it and install the new version when it is made available on 19th June 2019.

Users are happy with this news, but are still wondering what took Microsoft so long to deliver Hyper-V Server 2019.

https://twitter.com/ProvoSteven/status/1139926333839028224

People are also skeptical about the product quality. A user on Reddit states, "I'm shocked, shocked I tell you! Honestly, after nearly 9 months of MS being unable to release this, and two months after they said the only thing holding it back were "problems with the media", I'm not sure I would trust this edition. They have yet to fully explain what it is that held it back all these months after every other Server 2019 edition was in production."

Microsoft's Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more
Microsoft quietly deleted 10 million faces from MS Celeb, the world's largest facial recognition database
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]

MariaDB announces the release of MariaDB Enterprise Server 10.4

Amrata Joshi
12 Jun 2019
4 min read
Yesterday, the team at MariaDB announced the release of MariaDB Enterprise Server 10.4, code-named "restful nights". It is a hardened and secured server, distinct from MariaDB's Community Server. This release focuses on solving enterprise customer needs, offering greater reliability, stability and long-term support in production environments. MariaDB Enterprise Server 10.4 and its backported versions will be available to customers by the end of the month as part of the MariaDB Platform subscription.

https://twitter.com/mariadb/status/1138737719553798144

The official blog post reads, "For the past couple of years, we have been collaborating very closely with some of our large enterprise customers. From that collaboration, it has become clear that their needs differ vastly from that of the average community user. Not only do they have different requirements on quality and robustness, they also have different requirements for features to support production environments. That's why we decided to invest heavily into creating a MariaDB Enterprise Server, to address the needs of our customers with mission critical production workloads."

MariaDB Enterprise Server 10.4 comes with added functionality for enterprises running MariaDB at scale in production environments. It also involves new levels of testing and ships in a secure-by-default configuration. It includes the same features as MariaDB Server 10.4, among them bitemporal tables, an expanded set of instant schema changes and a number of improvements to authentication and authorization (e.g., password expiration and automatic/manual account locking).

Max Mether, VP of Server Product Management, MariaDB Corporation, wrote in an email to us, "The new version of MariaDB Server is a hardened database that transforms open source into enterprise open source." He further added, "We worked closely with our customers to add the features and quality they need to run in the most demanding production environments out-of-the-box. With MariaDB Enterprise Server, we're focused on top-notch quality, comprehensive security, fast bug fixes and features that let our customers run at internet-scale performance without downtime."

James Curtis, Senior Analyst, Data Platforms and Analytics, 451 Research, said, "MariaDB has maintained a solid place in the database landscape during the past few years." He added, "The company is taking steps to build on this foundation and expand its market presence with the introduction of MariaDB Enterprise Server, an open source, enterprise-grade offering targeted at enterprise clients anxious to stand up production-grade MariaDB environments."

Reliability and stability

MariaDB Enterprise Server 10.4 offers the reliability and stability required for production environments, backed by bug fixes that help maintain that reliability. Key enterprise features are backported for those running earlier versions of MariaDB Server, with long-term support.

Security

Unsecured databases are often the cause of data breaches, so MariaDB Enterprise Server 10.4 is configured with security settings to support enterprise applications. All non-GA plugins are disabled by default to reduce the risks incurred when using unsupported features. Further, the default configuration is changed to enforce strong security, durability and consistency.

Enterprise backup

MariaDB Enterprise Server 10.4 offers enterprise backup that brings operational efficiency to customers with large databases by breaking backups into non-blocking stages. This way, writes and schema changes can occur during backups rather than waiting for the backup to complete.

Auditing capabilities

The server adds secure, stronger and easier auditing capabilities by logging all changes to the audit configuration. It also logs detailed connection information, giving customers a comprehensive view of changes made to the database.

End-to-end encryption

It also offers end-to-end encryption for multi-master clusters, where the transaction buffers are encrypted to ensure that the data is secure.

https://twitter.com/holgermu/status/1138511727610478594

Learn more about this news on the official web page.

MariaDB CEO says big proprietary cloud vendors "strip-mining open-source technologies and companies"
MariaDB announces MariaDB Enterprise Server and welcomes Amazon's Mark Porter as an advisor to the board of directors
TiDB open sources its MySQL/MariaDB compatible data migration (DM) tool


OpenSUSE may go independent from SUSE, reports LWN.net

Vincy Davis
03 Jun 2019
3 min read
Lately, the relationship between SUSE and the openSUSE community has been under discussion. Different options are being considered, among which the possibility of setting openSUSE up as an entirely independent foundation is gaining momentum. This would give openSUSE greater autonomy and control over its own future and operations. Though openSUSE board chair Richard Brown and SUSE leadership have publicly reiterated that SUSE remains committed to openSUSE, there has been a lot of concern over openSUSE's ability to operate in a sustainable way without being entirely beholden to SUSE.

The idea of an independent openSUSE foundation has popped up many times in the past. Former openSUSE board member Peter Linnell says, "Every time, SUSE has changed ownership, this kind of discussion pops up with some mild paranoia IMO, about SUSE dropping or weakening support for openSUSE". He also adds, "Moreover, I know SUSE's leadership cares a lot about having a healthy independent openSUSE community. They see it as important strategically and the benefits go both ways."

On the contrary, openSUSE board member Simon Lees says, "it is almost certain that at some point in the future SUSE will be sold again or publicly listed, and given the current good working relationship between SUSE and openSUSE it is likely easier to have such discussions now vs in the future should someone buy SUSE and install new management that doesn't value openSUSE in the same way the current management does."

In an interview with LWN, Brown described the conversation between SUSE and the broader community about the possibility of an independent foundation as frank, ongoing, and healthy. He also mentioned that everything from a fully independent openSUSE foundation to a tweaking of the current relationship that provides more legal autonomy for openSUSE can be considered. There is also the possibility of some form of organization run under the auspices of the Linux Foundation.

Issues faced by openSUSE

Brown has said, "openSUSE has multiple stakeholders, but it currently doesn't have a separate legal entity of its own, which makes some of the practicalities of having multiple sponsors rather complicated". Under the current arrangement, it is difficult for openSUSE to directly handle financial contributions. Sponsorship and the ability to raise funding have become a prerequisite for openSUSE's survival. Brown comments, "openSUSE is in continual need of investment in terms of both hardware and manpower to 'keep the lights on' with its current infrastructure".

Another concern has been the tricky collaboration between the community and the company across all SUSE products; in particular, Brown has pointed to issues with openSUSE Kubic and the SUSE Container-as-a-Service Platform. With a more distinctly separate openSUSE, the implication and the hope, according to LWN, is that the project will have increased autonomy over its governance and its interaction with the wider community.

Though different models for openSUSE's governance are under consideration, Brown has said, "The current relationship between SUSE and openSUSE is unique and special, and I see these discussions as enhancing that, and not necessarily following anyone else's direction". No hard deadline has been declared. For more details, head over to the LWN article.

SUSE is now an independent company after being acquired by EQT for $2.5 billion
389 Directory Server set to replace OpenLDAP as Red Hat and SUSE withdraw support for OpenLDAP in their Enterprise Linux offerings
Salesforce open sources 'Lightning Web Components framework'


Unity Editor will now officially support Linux

Vincy Davis
31 May 2019
2 min read
Yesterday Martin Best, Senior Technical Product Manager at Unity, briefly announced that the Unity Editor will now officially support Linux. Currently the Editor is available only on ‘preview’ for Ubuntu and CentOS, but Best has stated that it will be fully supported by Unity 2019.3. Another important note is to make sure that before opening projects via the Linux Editor, the 3rd-party tools also support it. Unity has been offering an unofficial, experimental Unity Editor for Linux since 2015. Unity had released the 2019.1 version in April this year, in which it was mentioned that the Unity editor for Linux has moved into preview mode from the experimental status. Now the status has been made official. Best mentions in the blog post, “growing number of developers using the experimental version, combined with the increasing demand of Unity users in the Film and Automotive, Transportation, and Manufacturing (ATM) industries means that we now plan to officially support the Unity Editor for Linux.” The Unity Editor for Linux will be accessible to all Personal (free), Plus, and Pro licenses users, starting with Unity 2019.1. It will be officially supported on the following configurations: Ubuntu 16.04, 18.04 CentOS 7 x86-64 architecture Gnome desktop environment running on top of X11 windowing system Nvidia official proprietary graphics driver and AMD Mesa graphics driver Desktop form factors, running on device/hardware without emulation or compatibility layer Users are quite happy that the Unity Editor will now officially support Linux. A user on Reddit comments, “Better late than never.” Another user added, “Great news! I just used the editor recently. The older versions were quite buggy but the latest release feels totally on par with Windows. Excellent work Unity Linux team!” https://twitter.com/FourthWoods/status/1134196011235237888 https://twitter.com/limatangoalpha/status/1134159970973470720 For the latest builds, check out the Unity Hub. 
For giving feedback on the Unity Editor for Linux, head over to the Unity Forum page.
Obstacle Tower Environment 2.0: Unity announces Round 2 of its 'Obstacle Tower Challenge' to test AI game players
Unity has launched the 'Obstacle Tower Challenge' to test AI game players
Unity updates its TOS, developers can now use any third party service that integrate into Unity
Read more

article-image-ubuntu-19-04-disco-dingo-beta-releases-with-support-for-linux-5-0-and-gnome-3-32
Bhagyashree R
01 Apr 2019
2 min read

Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32

Bhagyashree R
01 Apr 2019
2 min read
Last week, the team behind Ubuntu announced the release of the Ubuntu 19.04 Disco Dingo Beta, which comes with Linux 5.0 support, GNOME 3.32, and more. The stable version is expected on April 18th, 2019. Following are some of the updates in Ubuntu 19.04 Disco Dingo:

Updates in the Linux kernel
Ubuntu 19.04 is based on Linux 5.0, which was released last month. It brings support for the AMD Radeon RX Vega M graphics processor, complete support for the Raspberry Pi 3B and 3B+, the Qualcomm Snapdragon 845, and much more.

Toolchain upgrades
The tools are upgraded to their latest releases, including glibc 2.29, OpenJDK 11, Boost 1.67, Rustc 1.31, updated GCC 8.3, Python 3.7.2 as default, Ruby 2.5.3, PHP 7.2.15, and more.

Updates in Ubuntu Desktop
This release ships with the latest GNOME 3.32, giving it a refreshed visual design. It also brings a few performance improvements and new features:
- GNOME Disks now supports VeraCrypt, a utility for on-the-fly encryption.
- A panel has been added to the Settings menu to help users manage Thunderbolt devices.
- More shell components are cached in GPU RAM, which reduces load and increases the FPS count.
- Desktop zoom works much more smoothly.
- An option has been added to the error reporting dialog window to automatically submit error reports.
Other updates include new Yaru icon sets, Mesa 19.0, QEMU 3.1, and libvirt 5.0. This release will be supported for 9 months, until January 2020. Users who require long-term support are recommended to use Ubuntu 18.04 LTS instead. To read the full list of updates, visit Ubuntu's official website.
Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users
Ubuntu releases Mir 1.0.0
Ubuntu free Linux Mint Project, LMDE 3 'Cindy' Cinnamon, released
Read more
article-image-sailfish-os-3-0-2-named-oulanka-now-comes-with-improved-power-management-and-more-features
Bhagyashree R
28 Mar 2019
2 min read

Sailfish OS 3.0.2, named Oulanka, now comes with improved power management and more features

Bhagyashree R
28 Mar 2019
2 min read
Last week, Jolla announced the release of Sailfish OS 3.0.2. The release goes by the name Oulanka, after a national park in the Lapland and Northern Ostrobothnia regions of Finland. Along with 44 fixed issues, it brings a battery saving mode, better connectivity, new device management APIs, and more.

Improved power management
Sailfish OS Oulanka comes with a battery saving mode, enabled by default when the battery drops below 20%. Users can also set the battery saving threshold themselves in the "Battery" section of the Settings menu.

Better connectivity
This release improves how Sailfish OS handles scenarios where a large number of Bluetooth and WLAN devices are connected to the network; Bluetooth and WLAN network scans will no longer slow down devices. Many updates have also been made to the firewall introduced in the previous release, Sipoonkorpi, for better robustness.

Updates in the Corporate API
This release brings several improvements to the Corporate API. New device management APIs have been added, covering data counters, call statistics, location data sources, proxy settings, app auto-start, roaming status, and cellular settings.

Sailfish X Beta for Xperia XA2
Sailfish X, the downloadable version of Sailfish OS for select devices, remains in beta for the XA2 with the Oulanka update. The team has improved several aspects of the Android 8.1 Support Beta for XA2 devices; Android apps can now connect to the internet more reliably over mobile data.

To know more about Sailfish OS Oulanka, check out the official announcement.
An early access to Sailfish 3 is here!
Linux 5.1 will come with Intel graphics, virtual memory support, and more
The Linux Foundation announces the CHIPS Alliance project for deeper open source hardware integration
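The battery saving behavior described above is simple to state precisely: the mode activates when the level drops below a threshold that defaults to 20% but can be overridden by the user. A minimal sketch (the function name is mine, not Jolla's API):

```python
# Hypothetical model of Oulanka's battery-saving trigger.
DEFAULT_THRESHOLD = 20  # per cent; the documented default

def battery_saving_active(level_percent, threshold=DEFAULT_THRESHOLD):
    """Return True when the battery level has dropped below the threshold."""
    return level_percent < threshold

# The release notes say the mode kicks in when the battery goes *lower* than 20%:
print(battery_saving_active(19))      # below the default threshold -> active
print(battery_saving_active(20))      # exactly at the threshold -> not active
print(battery_saving_active(25, 30))  # user raised the threshold to 30%
```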
Read more

article-image-uber-open-sources-peloton-a-unified-resource-scheduler
Natasha Mathur
27 Mar 2019
2 min read

Uber open-sources Peloton, a unified Resource Scheduler

Natasha Mathur
27 Mar 2019
2 min read
Earlier this month, Uber open-sourced Peloton, a unified resource scheduler that manages resources across distinct workloads. Peloton, first introduced in November last year, is built on top of Mesos. "By allowing others in the cluster management community to leverage unified schedulers and workload co-location, Peloton will open the door for more efficient resource utilization and management across the community," states the Uber team. Peloton is designed for web-scale companies such as Uber, with millions of containers and tens of thousands of nodes. It comes with advanced resource management capabilities such as elastic resource sharing, hierarchical max-min fairness, resource overcommit, and workload preemption. Peloton uses Mesos to aggregate resources from different hosts and then launch tasks as Docker containers. It also makes use of hierarchical resource pools to manage elastic, cluster-wide resources more efficiently. Before Peloton, each workload at Uber ran on its own cluster, which resulted in various inefficiencies. With Peloton, mixed workloads can be colocated in shared clusters for better resource utilization.
Peloton feature highlights:
- Elastic resource sharing: hierarchical resource pools help share resources elastically among different teams.
- Resource overcommit and task preemption: cluster utilization is improved by scheduling workloads that use slack resources.
- Optimized for big data workloads: support for advanced Apache Spark features such as dynamic resource allocation.
- Optimized for machine learning: support for GPU and gang scheduling for TensorFlow and Horovod.
- High scalability: clusters can scale to millions of containers and tens of thousands of nodes.
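One of the capabilities listed above, max-min fairness, is a standard scheduling idea: satisfy small demands in full and split the remaining capacity evenly among the larger ones. A textbook progressive-filling sketch, not Peloton's actual implementation (which is hierarchical and far more involved):

```python
# Non-hierarchical max-min fair allocation by progressive filling.
def max_min_fair(capacity, demands):
    """Split `capacity` across `demands`: no pool gets more than it asked for,
    and leftover capacity is redistributed to still-unsatisfied pools."""
    alloc = [0.0] * len(demands)
    remaining = list(range(len(demands)))
    while remaining and capacity > 1e-9:
        share = capacity / len(remaining)       # equal share of what's left
        for i in remaining[:]:
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            capacity -= give
            if alloc[i] >= demands[i] - 1e-9:   # demand fully satisfied
                remaining.remove(i)
    return alloc

# Three teams demand 2, 4 and 10 units of a 12-unit cluster: the two small
# demands are met in full and the large one absorbs the remainder.
print(max_min_fair(12, [2, 4, 10]))  # -> [2.0, 4.0, 6.0]
```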
"Open sourcing Peloton will enable greater industry collaboration and open up the software to feedback and contributions from industry engineers, independent developers, and academics across the world," states the Uber team.
Uber and Lyft drivers strike in Los Angeles
Uber and GM Cruise are open sourcing their Automation Visualization Systems
Uber releases Ludwig, an open source AI toolkit that simplifies training deep learning models for non-experts
Read more

article-image-facebook-and-microsoft-announce-open-rack-v3-to-address-the-power-demands-from-artificial-intelligence-and-networking
Bhagyashree R
18 Mar 2019
3 min read

Facebook and Microsoft announce Open Rack V3 to address the power demands from artificial intelligence and networking

Bhagyashree R
18 Mar 2019
3 min read
For the past few months, Facebook and Microsoft have been working together on a new architecture based on the Open Rack standards. Last week, Facebook announced a new initiative that aims to build uniformity around the Rack & Power design. The Rack & Power Project Group is responsible for setting the rack standards designed for data centers and integrating the rack into the data center infrastructure. The project is part of a larger initiative started by Facebook called the Open Compute Project (OCP).

Why is a new version of Open Rack needed?
Today, the industry is turning to AI and ML systems to solve several difficult problems. While these systems are helpful, they require increased power density at both the component level and the system level. The ever-increasing bandwidth demands of networking systems have led to similar problems. To improve overall system performance, it is important to bring memory, processors, and system fabrics as close together as possible. The new architecture of Open Rack will bring greater benefits compared to the current version, Open Rack V2. "For this next version, we are collaborating to create flexible, interoperable, and scalable solutions for the community through a common OCP architecture. Accomplishing this goal will enable wider adoption of OCP technologies across multiple industries, which will benefit operators, solution providers, original design manufacturers, and configuration managers," shared Facebook in the blog post.

What are the goals of this initiative?
The initiative aims to achieve the following goals:
- A common OCP rack architecture to enable greater sharing between Microsoft and Facebook.
- A flexible frame and power infrastructure that will support a wide range of solutions across the OCP community.
- Beyond the features needed by Facebook, additional features for the larger community, including physical security for solutions deployed in co-location facilities.
- New thermal solutions such as liquid cooling manifolds, door-based heat exchangers, and defined physical and thermal interfaces; these are currently under development by the Advanced Cooling Solutions sub-project.
- New power and battery backup solutions that scale across different rack power levels and accommodate different power input types.
To know more in detail, check out the official announcement on Facebook.
Two top executives leave Facebook soon after the pivot to privacy announcement
Facebook tweet explains 'server config change' for 14-hour outage on all its platforms
Facebook under criminal investigations for data sharing deals: NYT report
Read more
article-image-user-discovers-bug-in-debian-stable-kernel-upgrade-armmp-package-affected
Melisha Dsouza
18 Feb 2019
3 min read

User discovers bug in debian stable kernel upgrade; armmp package affected

Melisha Dsouza
18 Feb 2019
3 min read
Last week, Jürgen Löb, a Debian user, discovered a bug in the linux-image-4.9.0-8-armmp-lpae package of the Debian system; the affected version is 4.9.144-3. He reports that he updated his Lamobo R1 board with apt update; apt upgrade, after which u-boot hung at "Starting kernel" with no further output. He hit the same issue on a Bananapi 1 board. He performed the following steps to recover his system:
1. Downgrade to a backup kernel by mounting the boot partition on the SD card.
2. Extract the boot script: dd if=boot.scr of=boot.script bs=72 skip=1
3. In boot.script, replace setenv fk_kvers '4.9.0-8-armmp-lpae' with setenv fk_kvers '4.9.0-7-armmp-lpae' (a backup kernel was available on his boot partition).
4. Repack the script: mkimage -C none -A arm -T script -d boot.script boot.scr
After performing these steps he was able to boot the system with the old kernel version and restore the previous version (4.9.130-2) with: dpkg -i linux-image-4.9.0-8-armmp-lpae_4.9.130-2_armhf.deb
He cross-checked the issue: upgrading to 4.9.144-3 again after these steps reproduces the unbootable behavior, confirming that the upgrade to 4.9.144-3 is causing the problem. Timo Sigurdsson, another Debian user, stated, "I recovered both systems by replacing the contents of the directories /boot/ and /lib/modules/ with those of a recent backup (taken 3 days ago). After logging into the systems again, I downgraded the package linux-image-4.9.0-8-armmp-lpae to 4.9.130-2 and rebooted again in order to make sure no other package upgrade caused the issue. Indeed, with all packages up-to-date except linux-image-4.9.0-8-armmp-lpae, the systems work just fine. So, there must be a serious regression in 4.9.144-3 at least on armmp-lpae."
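The recovery steps above can be sketched end to end. This demonstration runs on a synthetic boot.scr rather than a real SD card (a real one lives on the board's boot partition and carries a 72-byte uImage header; mkimage comes from the u-boot-tools package, and the destructive steps are left commented out):

```shell
# Simulate a boot.scr: 72 bytes of header followed by the boot script text.
printf '%072d' 0 > boot.scr
printf "setenv fk_kvers '4.9.0-8-armmp-lpae'\n" >> boot.scr
# Step 1: strip the 72-byte uImage header to recover the plain script.
dd if=boot.scr of=boot.script bs=72 skip=1 2>/dev/null
# Step 2: point the script at the known-good backup kernel.
sed -i "s/4.9.0-8-armmp-lpae/4.9.0-7-armmp-lpae/" boot.script
# Step 3: repack (run on the real boot partition, not in this demo):
# mkimage -C none -A arm -T script -d boot.script boot.scr
# Step 4: after booting the old kernel, restore the previous package version:
# dpkg -i linux-image-4.9.0-8-armmp-lpae_4.9.130-2_armhf.deb
cat boot.script
```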
In response to the thread, multiple users reported other broken configurations; for instance, the plain armmp (non-lpae) kernel is broken on Armada385/Caiman and under QEMU. Vagrant Cascadian, another user, added that all of his armhf boards running this kernel failed to boot, including:
- imx6: Cubox-i4pro, Cubox-i4x4, Wandboard Quad
- exynos5: Odroid-XU4
- exynos4: Odroid-U3
- rk3328: firefly-rk3288
- sunxi A20: Cubietruck
The Debian team has not yet given an official response. You can head over to the Debian bugs page for more information on this news.
Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
Debian 9.7 released with fix for RCE flaw
Read more

article-image-openwrt-18-06-2-released-with-major-bug-fixes-updated-linux-kernel-and-more
Amrata Joshi
04 Feb 2019
3 min read

OpenWrt 18.06.2 released with major bug fixes, updated Linux kernel and more!

Amrata Joshi
04 Feb 2019
3 min read
Last week the team at OpenWrt announced OpenWrt 18.06.2, the second service release of the stable OpenWrt 18.06 series. OpenWrt is a Linux operating system targeting embedded devices; it provides a fully writable filesystem with optional package management and is a complete replacement for the vendor-supplied firmware of a wide range of wireless routers and non-network devices.

What's new in OpenWrt 18.06.2?
OpenWrt 18.06.2 brings bug fixes in the network stack and the build system, plus updates to the kernel and base packages:
- The Linux kernel has been updated to 4.9.152/4.14.95 (from 4.9.120/4.14.63 in 18.06.1).
- The GNU time dependency has been removed.
- Support for the bpf match has been added.
- A blank line is now inserted after the KernelPackage template to allow chaining calls.
- An INSTALL_SUID macro has been added.
- Support for enabling the rootfs/boot partition size option via tar has been added.
- Building of artifacts has been introduced.
- A package URL has been updated.
- An uninitialized return value has been fixed.

Major bug fixes
- The docbook2man error has been fixed.
- Issues with the libressl build on x32 (amd64ilp32) hosts have been fixed.
- The build has been fixed without modifying Makefile.am.
- A Fedora patch has been added for crashing git-style patches.
- A syntax error has been fixed.
- Security fixes for the Linux kernel, GNU patch, Glibc, BZip2, Grub, OpenSSL, and MbedTLS.
- IPv6 and network service fixes.

Some users are pleased with the release, noting that despite a small team and budget, OpenWrt powers a remarkable number of routers. One comment reads, "The new release still works fine on a TP-Link TL-WR1043N/ND v1 (32MB RAM, 8MB Flash). This is an old router I got from the local reuse center for $10 a few years ago. It can handle a 100 Mbps fiber connection fine and has 5 gigabit ports. Thanks Openwrt!" Others question whether cheap routers can keep up with fast connections; one user commented on Hacker News, "My internet is too fast (150 mbps) for a cheap router to effectively manage the connection, meaning that unless I pay 250€ for a router, I will just slow down my Internet needlessly." Read more about this news on OpenWrt's official blog post.
Mapzen, an open-source mapping platform, joins the Linux Foundation project
Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
The Haiku operating system has released R1/beta1
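The kernel bump above (4.9.120 to 4.9.152) is the kind of thing an admin might script a check for before flashing. A minimal sketch of a version-aware "is an upgrade available?" check, my own construction rather than an official OpenWrt tool, using sort -V for version comparison:

```shell
# Compare the running kernel version against the one shipped in 18.06.2.
current="4.9.120"   # kernel in 18.06.1, per the release notes
shipped="4.9.152"   # kernel in 18.06.2
newer=$(printf '%s\n%s\n' "$current" "$shipped" | sort -V | tail -n 1)
if [ "$newer" = "$shipped" ] && [ "$current" != "$shipped" ]; then
  echo "kernel upgrade available: $current -> $shipped"
else
  echo "kernel already up to date"
fi
```

On a live router the actual flash step would be sysupgrade with the matching firmware image, which preserves the writable configuration overlay.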
Read more