
Tech News


Introducing Luna, world’s first programming language with dual syntax representation, data flow modeling and much more!

Amrata Joshi
17 Jun 2019
3 min read
Luna, a data processing and visualization environment, provides a library of highly tailored, domain-specific components as well as a framework for building new components. Luna focuses on domains related to data processing, such as IoT, bioinformatics, data science, graphic design, and architecture. What's so interesting about Luna?

Data flow modeling
Luna is a data flow modeling whiteboard that allows users to draw components and the way data flows between them. Components in Luna are simply nested data flow graphs, and users can enter any component or its subcomponents to move from high to low levels of abstraction. Luna is also designed as a general-purpose programming language with two equivalent representations, visual and textual.

Data processing and visualizing
Luna components can visualize their results and use colors to indicate the type of data they exchange. Users can compare all the intermediate outcomes and understand the flow of data by looking at the graph. They can also adjust parameters and observe how they affect each step of the computation in real time.

Debugging
Luna can assist in analyzing network service outages and data corruption. When errors occur, Luna tracks and displays their path through the graph so that users can easily follow them and understand where they come from. It also records and visualizes information about performance and memory consumption.

Luna Explorer, the search engine
Luna comes with Explorer, a context-aware fuzzy search engine that lets users query libraries for desired components as well as browse their documentation. Since Explorer is context-aware, it can understand the flow of data, predict users' intentions, and adjust the search results accordingly.

Dual syntax representation
Luna is also the world's first programming language that features two equivalent syntax representations, visual and textual.

Automatic parallelism
Luna features automatic parallelism built on Haskell's state-of-the-art GHC runtime system, which can run thousands of threads in a fraction of a second. It automatically partitions a program and schedules its execution over the available CPU cores.

Users seem to be happy with Luna. One user commented on Hacker News, "Luna looks great. I've been doing work in this area myself and hope to launch my own visual programming environment next month or so." Others are pleased that Luna's text syntax supports building functional blocks. Another user commented, "I like that Luna has a text syntax. I also like that Luna supports building graph functional blocks that can be nested inside other graphs. That's a missing link in other tools of this type that limits the scale of what you can do with them."

To know more about this, check out the official Luna website.

Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
Polyglot programming allows developers to choose the right language to solve tough engineering problems
Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study
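To make the "data flow modeling" idea above concrete, here is a minimal, generic sketch in Python of a graph of components whose outputs feed other components. This is purely illustrative and is not Luna code or Luna syntax; the class and names are invented for the example.

# A minimal, generic sketch of the data-flow idea described above -- not Luna code.
# Each node wraps a function; edges say whose outputs flow into whose inputs.

class Node:
    def __init__(self, func, *inputs):
        self.func = func        # the computation this component performs
        self.inputs = inputs    # upstream nodes whose outputs flow in

    def evaluate(self):
        # Pull data through the graph: evaluate upstream nodes first,
        # then apply this node's function to their results.
        return self.func(*(n.evaluate() for n in self.inputs))

# Build a tiny graph: two constant sources flowing into a sum node.
source_a = Node(lambda: 2)
source_b = Node(lambda: 3)
total = Node(lambda a, b: a + b, source_a, source_b)

print(total.evaluate())  # 5 -- inspecting any node shows its intermediate result

Nesting follows naturally from this shape: a component can itself be a small graph of nodes, which is the "move from high to low levels of abstraction" behavior the article describes.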


Introducing OpenDrop, an open-source implementation of Apple AirDrop written in Python

Vincy Davis
21 Aug 2019
3 min read
A group of German researchers recently published a paper, "A Billion Open Interfaces for Eve and Mallory: MitM, DoS, and Tracking Attacks on iOS and macOS Through Apple Wireless Direct Link," at the 28th USENIX Security Symposium (August 14-16) in the USA. The paper reveals security and privacy vulnerabilities in Apple's AirDrop file-sharing service, as well as denial-of-service (DoS) attacks that lead to privacy leaks or simultaneous crashing of all neighboring devices.

As part of the research, two of the researchers, Milan Stute and Alexander Heinrich, developed OpenDrop, an open-source implementation of Apple AirDrop written in Python. OpenDrop is a FOSS implementation of AirDrop. It is experimental software and is the result of reverse-engineering efforts by the Open Wireless Link (OWL) project. It is compatible with Apple AirDrop and is used for sharing files among Apple devices such as iOS and macOS devices, or on Linux systems running an open re-implementation of Apple Wireless Direct Link (AWDL).

The OWL project consists of researchers from the Secure Mobile Networking Lab at TU Darmstadt looking into Apple's wireless ecosystem. It aims to assess security and privacy and to enable cross-platform compatibility for next-generation wireless applications. Currently, OpenDrop only supports Apple devices. However, it does not support all features of AirDrop and may be incompatible with future AirDrop versions. It uses the current versions of OpenSSL and libarchive and requires Python 3.6+. OpenDrop is licensed under the GNU General Public License v3.0 and is not affiliated with or endorsed by Apple Inc.

Limitations in OpenDrop
Triggering macOS/iOS receivers via Bluetooth Low Energy: Since Apple devices start their AWDL interface and AirDrop server only after receiving a custom advertisement via Bluetooth LE, Apple AirDrop receivers may not be discovered.
Sender/receiver authentication and connection state: Currently, OpenDrop does not conduct peer authentication. It does not verify whether the TLS certificate is signed by Apple's root, and it automatically accepts any file that it receives.
Sending multiple files: OpenDrop does not support sending multiple files at once, a feature supported by Apple's AirDrop.

Users are excited to try the new OpenDrop implementation. A Redditor comments, "Yesssss! Will try this out soon on Ubuntu." Another comment reads, "This is neat. I did not realize that enough was known about AirDrop to reverse engineer it. Keep up the good work." Another user says, "Wow, I can't wait to try this out! I've been in the Apple ecosystem for years and AirDrop was the one thing I was really going to miss."

A few Android users wish to see such implementations in an Android app. A user on Hacker News says, "Would be interesting to see an implementation of this in the form of an Android app, but it looks like that might require root access." A Redditor comments, "It'd be cool if they were able to port this over to android as well."

To know how to send and receive files using OpenDrop, check out its GitHub page.

Apple announces expanded security bug bounty program up to $1 million; plans to release iOS Security Research Device program in 2020
Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage
'FaceTime Attention Correction' in iOS 13 Beta 3 uses ARKit to fake eye contact


The Tor Project on browser fingerprinting and how it is taking a stand against it

Bhagyashree R
06 Sep 2019
4 min read
In a blog post shared on Wednesday, Pierre Laperdrix, a postdoctoral researcher in the Secure Web Applications Group at CISPA, talked about browser fingerprinting, its risks, and the efforts taken by the Tor Project to prevent it. He also talked about his Fingerprint Central website, which has been an official part of the Tor Project since 2017.

What is browser fingerprinting?
Browser fingerprinting is the systematic collection of information about a remote computing device for the purpose of identification. There are several techniques through which a third party can obtain a "rich fingerprint." These include the availability of JavaScript or other client-side scripting languages, the user-agent and accept headers, the HTML5 Canvas element, and more. Browser fingerprints may include information such as browser and operating system type and version, active plugins, timezone, language, screen resolution, and various other active settings.

Some users may think that these attributes are too generic to identify a particular person. However, a study by Panopticlick, a browser fingerprinting test website, found that only 1 in 286,777 other browsers will share a given browser's fingerprint. Laperdrix shared an example fingerprint in his post (image source: The Tor Project).

As with any technology, browser fingerprinting can be used or misused. Fingerprints can enable a remote application to prevent potential fraud or online identity theft. On the other hand, they can also be used to track users across websites and collect information about their online behavior without their consent. Advertisers and marketers can use this data for targeted advertising.

Read also: All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night

Steps taken by the Tor Project to prevent browser fingerprinting
Laperdrix said that Tor was the very first browser to understand and address the privacy threats browser fingerprinting poses. The Tor Browser, which goes by the tagline "anonymity online," is designed to reduce online tracking and identification of users. The browser takes a very simple approach to preventing the identification of users. "In the end, the approach chosen by Tor developers is simple: all Tor users should have the exact same fingerprint. No matter what device or operating system you are using, your browser fingerprint should be the same as any device running Tor Browser," Laperdrix wrote.

Many other changes have been made to the Tor Browser over the years to prevent the unique identification of users. Tor warns users when they maximize their browser window, as window size is another attribute that can be used to identify them. It has introduced default fallback fonts to prevent font and canvas fingerprinting. It also sets all JS clock sources and event timestamps to a specific resolution to prevent JavaScript from measuring the time intervals of things like typing to produce a fingerprint.

Talking about his contribution towards preventing browser fingerprinting, Laperdrix wrote, "As part of the effort to reduce fingerprinting, I also developed a fingerprinting website called FP Central to help Tor developers find fingerprint regressions between different Tor builds." As part of Google Summer of Code 2016, Laperdrix proposed to develop a website called Fingerprint Central, which is now officially included in the Tor Project. Similar to AmIUnique.org or Panopticlick, FP Central was developed to study the diversity of browser fingerprints. It runs a fingerprinting test suite and collects data from Tor browsers to help developers design and test new fingerprinting protection. They can also use it to ensure that fingerprinting-related bugs are correctly fixed with specific regression tests. Explaining the long-term goal of the website, he said, "The expected long-term impact of this project is to reduce the differences between Tor users and reinforce their privacy and anonymity online."

There are a whole lot of modifications made under the hood to prevent browser fingerprinting that you can check out using the "tbb-fingerprinting" tag in the bug tracker. These modifications will also make their way into future releases of Firefox under the Tor Uplift program.

Many organizations have taken a stand against browser fingerprinting, including the browser makers Mozilla and Brave. Earlier this week, Firefox 69 shipped with browser fingerprinting blocked by default. Brave also comes with a Fingerprinting Protection Mode enabled by default. In 2018, Apple updated Safari to only share a simplified system profile, making it difficult to uniquely identify or track users.

Read also: Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Check out Laperdrix's post on the Tor blog to know more in detail about browser fingerprinting.

Other news in Web
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
Google Chrome 76 now supports native lazy-loading
#Reactgate forces React leaders to confront the community's toxic culture head on
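To illustrate why a collection of individually generic attributes becomes identifying, here is a small Python sketch that combines such attributes into a near-unique identifier. This is purely illustrative; it is not code from the Tor Project, FP Central, or Panopticlick, and the attribute values are made up.

# Rough sketch of deriving a fingerprint from otherwise generic attributes.
# Purely illustrative; the values below are invented for the example.

import hashlib

attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "accept_language": "en-US,en;q=0.9",
    "screen_resolution": "1920x1080",
    "timezone": "Europe/Paris",
    "installed_fonts": "Arial;Calibri;Consolas",
}

# Individually these values are common; concatenated and hashed they form an
# identifier that very few other browsers are likely to share.
canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()

print(fingerprint[:16])  # a stable ID that can follow the browser across sites

Tor Browser's "everyone gets the same fingerprint" approach attacks exactly this step: if every attribute value is identical across users, the resulting hash carries no identifying information.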


Introducing Watermelon DB: A new relational database to make your React and React Native apps highly scalable

Bhagyashree R
11 Sep 2018
2 min read
Now you can store your data in Watermelon! Yesterday, Nozbe released WatermelonDB v0.6.1-1, a new addition to the database world. It aims to help you build powerful React and React Native apps that scale to a large number of records and remain fast.

Watermelon's architecture is database-agnostic, making it cross-platform. It is a high-level layer for dealing with data, but it can be plugged into any underlying database, depending on platform needs.

Why choose WatermelonDB?
WatermelonDB is optimized for building complex React and React Native applications. The following factors help ensure high application speed:
It makes your application highly scalable by using lazy loading, which means WatermelonDB loads data only when it is requested.
Most queries resolve in less than 1 ms, even with 10,000 records, as all querying is done on an SQLite database on a separate thread.
You can launch your app instantly irrespective of how much data you have.
It is supported on iOS, Android, and the web.
It is statically typed, keeping Flow, a static type checker for JavaScript, in mind.
It is fast, asynchronous, multi-threaded, and highly cached.
It is designed to be used with a synchronization engine to keep the local database up to date with a remote database.

Currently, WatermelonDB is in active development and cannot be used in production. The roadmap states that migrations will soon be added to allow production use of WatermelonDB. Schema migrations are the mechanism by which you can add new tables and columns to the database in a backward-compatible way.

To learn how to install it and to try a few examples, check out WatermelonDB on GitHub.

React Native 0.57 coming soon with new iOS WebViews
What's in the upcoming SQLite 3.25.0 release: window functions, better query optimizer and more
React 16.5.0 is now out with a new package for scheduling, support for DevTools, and more!
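As a rough illustration of the lazy-loading idea described above, here is a small sketch written in Python with SQLite for brevity. It is not WatermelonDB's actual API; the class and names are invented for the example. The point is simply that no rows are read until a query result is actually consumed.

# Generic lazy-loading sketch -- NOT WatermelonDB's API, just the idea.

import sqlite3

class LazyQuery:
    def __init__(self, connection, sql, params=()):
        self.connection = connection
        self.sql = sql
        self.params = params
        self._cache = None          # nothing is fetched at construction time

    def fetch(self):
        if self._cache is None:     # hit the database only on first use
            self._cache = self.connection.execute(self.sql, self.params).fetchall()
        return self._cache

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, title TEXT)")
conn.execute("INSERT INTO posts VALUES (1, 'Hello Watermelon')")

posts = LazyQuery(conn, "SELECT * FROM posts WHERE id = ?", (1,))
# The app can start instantly; the query only runs when the data is needed:
print(posts.fetch())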


Rust’s original creator, Graydon Hoare on the current state of system programming and safety

Bhagyashree R
20 Jun 2019
4 min read
Back in July 2010, Graydon Hoare showcased the Rust programming language for the very first time at the Mozilla Annual Summit. Rust is an open-source systems programming language that was created with speed, memory safety, and parallelism in mind. Thanks to Rust's memory and thread safety guarantees, a supportive community, and a quickly evolving toolchain, many major projects are being rewritten in Rust. One of the major ones is Servo, an HTML rendering engine that will eventually replace Firefox's rendering engine. Mozilla is also using Rust to rewrite many other key parts of Firefox under Project Quantum. Fastly chose Rust to implement Lucet, its native WebAssembly compiler and runtime. More recently, Facebook also chose Rust to implement its controversial Libra blockchain.

As the 9th anniversary of the day when Hoare first presented Rust in front of a large audience approaches, The New Stack conducted a very interesting interview with him. In the interview, he talked about the current state of systems programming, how safe he considers our current complex systems to be, how they can be made safer, and more. Here are the key highlights from the interview:

Hoare on a brief history of Rust
Hoare started working on Rust as a side project in 2006. Mozilla, his employer at the time, got interested in the project and provided him with a team of engineers to help in the further development of the language. In 2013, he experienced burnout and decided to step down as technical lead. After working on some less time-sensitive projects, he quit Mozilla and worked for the payment network Stellar. In 2016, he got a call from Apple to work on the Swift programming language.

Rust is now being developed by the core teams and an active community of volunteer coders. This programming language that he once described as a "spare-time kinda thing" is being used by many developers to create a wide range of new software applications, from operating systems to simulation engines for virtual reality. It was also "the most loved programming language" in the Stack Overflow Developer Survey for four years in a row (2016-2019).

Hoare was very humble about the hard work and dedication he has put into creating the Rust programming language. When asked to summarize Rust's history, he simply said that "we got lucky". He added, "that Mozilla was willing to fund such a project for so long; that Apple, Google, and others had funded so much work on LLVM beforehand that we could leverage; that so many talented people in academia, industry and just milling about on the internet were willing to volunteer to help out."

The current state of systems programming and safety
Hoare considers the state of systems programming languages "healthy" compared to the first couple of decades of his career. Now, it is far easier to sell a language that is focused on performance and correctness. We are seeing more good languages coming into the market because of the increasing interaction between academia and industry.

When asked about safety, Hoare believes that though we are slowly taking steps towards better safety, the overall situation is not getting better. He attributes this to the number of new complex computing systems being built. He said, "complexity beyond comprehension means we often can't even define safety, much less build mechanisms that enforce it." Another reason, according to him, is the huge amount of vulnerable software already in the field that can be exploited at any time by a bad actor. For instance, on Tuesday, a zero-day vulnerability was fixed in Firefox that was being "exploited in the wild" by attackers. "Like much of the legacy of the 20th century, there's just a tremendous mess in software that's going to take generations to clean up, assuming humanity even survives that long," he adds.

How systems programming can be made safer
Hoare designed Rust with safety in mind. Its rich type system and ownership model ensure memory and thread safety. However, he suggests that we can do a lot better when it comes to safety in systems programming. He listed a number of improvements we could implement: "information flow control systems, effect systems, refinement types, liquid types, transaction systems, consistency systems, session types, unit checking, verified compilers and linkers, dependent types."

Hoare believes that many of these features have already been suggested by academia. The main challenge is to implement them "in a balanced, niche-adapted language that's palatable enough to industrial programmers to be adopted and used."

You can read Hoare's full interview on The New Stack.

Rust 1.35.0 released
Rust shares roadmap for 2019
Rust 1.34 releases with alternative cargo registries, stabilized TryFrom and TryInto, and more


Meet yuzu – an experimental emulator for the Nintendo Switch

Sugandha Lahoti
17 Jul 2018
3 min read
The makers of Citra, an emulator for the Nintendo 3DS, have released a new emulator called yuzu. This emulator is made for the Nintendo Switch, the 7th major video game console from Nintendo.

The journey so far for yuzu
yuzu was initiated as an experimental setup by Citra's lead developer bunnei after he saw signs that the Switch's operating system was based on the 3DS's operating system. yuzu has the same core code as Citra and much of the same OS High-Level Emulation (HLE). The core emulation and memory management of yuzu are based on Citra, albeit modified to work with 64-bit addresses. It also has a loader for Switch games and Unicorn integration for CPU emulation.

yuzu uses a reverse-engineering process to figure out how games work and how the Switch GPU works. The Switch's GPU is more advanced than the 3DS GPU used in Citra and poses multiple challenges to reverse engineer. However, the reverse-engineering process for yuzu is essentially the same as for Citra. Most of the reverse engineering and other development is being done in a trial-and-error manner.

OS emulation
The Switch's OS is based on the Nintendo 3DS's OS, so the developers reused a large part of Citra's OS HLE code for the yuzu OS. The loader and file system service were reused from Citra and modified to support Switch game dump files. The kernel OS threading, scheduling, and synchronization fixes for yuzu were also ported from Citra's OS implementation. The save data functionality, which allows games to read and write files to the save data directory, was also taken from the 3DS. Switchbrew helped them create libnx, a userland library for writing homebrew apps for the Nintendo Switch. (Homebrew is a popular term for applications that are created and executed on a video game console by hackers, programmers, developers, and consumers.)

The Switch IPC (inter-process communication) mechanism is much more robust and complicated than the 3DS's. Its system has different command modes, a typical IPC request response, and a Domain to efficiently conduct multiple service calls. yuzu uses the Nvidia services to configure the video driver and get graphics output. However, Nintendo re-purposed the Android graphics stack and used it in the Switch for rendering, so the yuzu developers had to implement this even to get homebrew applications to display graphics.

The next steps
Being at a nascent stage, yuzu still has a long way to go. The developers still have to add HID (user input support), such as support for all 9 controllers, rumble, LEDs, layouts, etc. Currently, Audio HLE is in progress, but they still have to implement audio playback. Audio playback, if implemented properly, would be a major breakthrough, as most complicated games often hang or deadlock because of this issue. They are also working on minor fixes to help games like Super Mario Odyssey, 1-2-Switch, and The Binding of Isaac boot further.

Be sure to read the entire progress report on the yuzu blog.

AI for game developers: 7 ways AI can take your game to the next level
AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior
Unity 2018.2: Unity release for this year 2nd time in a row!

Firewall Ports You Need to Open for Availability Groups from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
6 min read
Something that never ceases to amaze me is the frequent request for help figuring out what ports are needed for Availability Groups in SQL Server to function properly. These requests come for a multitude of reasons, from a new AG implementation to a migration of an existing AG to a different VLAN. Whenever these requests come in, it is a good thing in my opinion. Why? Well, that tells me that the network team is trying to instantiate a more secure operating environment by having segregated VLANs and firewalls between the VLANs. This is always preferable to having firewall rules of ANY/ANY (I correlate that kind of firewall rule to granting "CONTROL" to the public server role in SQL Server).

So What Ports are Needed Anyway?
If you are of the mindset that a firewall rule of ANY/ANY is a good thing, or if your Availability Group is entirely within the same VLAN, then you may not need to read any further. Unless, of course, you have a software firewall (such as Windows Defender / Firewall) running on your servers. If you are in the category where you do need to figure out which ports are necessary, then this article will provide you with a very good starting point.

Windows Server Clustering – TCP/UDP
TCP/UDP 53 – User & Computer Authentication [DNS]
TCP/UDP 88 – User & Computer Authentication [Kerberos]
UDP 123 – Windows Time [NTP]
TCP 135 – Cluster DCOM Traffic [RPC, EPM]
UDP 137 – User & Computer Authentication [NetLogon, NetBIOS, Cluster Admin, Fileshare Witness]
UDP 138 – DSF, Group Policy [DFSN, NetLogon, NetBIOS Datagram Service, Fileshare Witness]
TCP 139 – DSF, Group Policy [DFSN, NetLogon, NetBIOS Datagram Service, Fileshare Witness]
UDP 161 – SNMP
TCP/UDP 162 – SNMP Traps
TCP/UDP 389 – User & Computer Authentication [LDAP]
TCP/UDP 445 – User & Computer Authentication [SMB, SMB2, CIFS, Fileshare Witness]
TCP/UDP 464 – User & Computer Authentication [Kerberos Change/Set Password]
TCP 636 – User & Computer Authentication [LDAP SSL]
TCP 3268 – Microsoft Global Catalog
TCP 3269 – Microsoft Global Catalog [SSL]
TCP/UDP 3343 – Cluster Network Communication
TCP 5985 – WinRM 2.0 [Remote PowerShell]
TCP 5986 – WinRM 2.0 HTTPS [Remote PowerShell SECURE]
TCP/UDP 49152-65535 – Dynamic TCP/UDP [Defined by Company/Policy {CAN BE CHANGED}; RPC and DCOM] *

SQL Server – TCP/UDP
TCP 1433 – SQL Server/Availability Group Listener [Default Port {CAN BE CHANGED}]
TCP/UDP 1434 – SQL Server Browser
UDP 2382 – SQL Server Analysis Services Browser
TCP 2383 – SQL Server Analysis Services Listener
TCP 5022 – SQL Server DBM/AG Endpoint [Default Port {CAN BE CHANGED}]
TCP/UDP 49152-65535 – Dynamic TCP/UDP [Defined by Company/Policy {CAN BE CHANGED}] *

* Randomly allocated UDP port number between 49152 and 65535

So I have a List of Ports, what now?
Knowing is half the power, and with great knowledge comes great responsibility – or something like that. In reality, now that you know what is needed, the next step is to go out and validate that the ports are open and working. One of the easier ways to do this is with PowerShell.
$RemoteServers = "Server1","Server2"
$InbndServer = "HomeServer"
$TCPPorts = "53", "88", "135", "139", "162", "389", "445", "464", "636", "3268", "3269", "3343", "5985", "5986", "49152", "65535", "1433", "1434", "2383", "5022"
$UDPPorts = "53", "88", "123", "137", "138", "161", "162", "389", "445", "464", "3343", "49152", "65535", "1434", "2382"

$TCPResults = @()
# Run the TCP checks from each remote server back to the inbound server
$TCPResults = Invoke-Command $RemoteServers {param($InbndServer,$TCPPorts)
    $Object = New-Object PSCustomObject
    $Object | Add-Member -MemberType NoteProperty -Name "ServerName" -Value $env:COMPUTERNAME
    $Object | Add-Member -MemberType NoteProperty -Name "Destination" -Value $InbndServer
    Foreach ($P in $TCPPorts){
        $PortCheck = (TNC -Port $p -ComputerName $InbndServer).TcpTestSucceeded
        If($PortCheck -notmatch "True|False"){$PortCheck = "ERROR"}
        $Object | Add-Member Noteproperty "$("Port " + "$p")" -Value "$($PortCheck)"
    }
    $Object
} -ArgumentList $InbndServer,$TCPPorts | select * -ExcludeProperty runspaceid, pscomputername

$TCPResults | Out-GridView -Title "AG and WFC TCP Port Test Results"
$TCPResults | Format-Table * #-AutoSize

# UDP has no handshake, so use a UdpClient connect attempt instead of Test-NetConnection
$UDPResults = Invoke-Command $RemoteServers {param($InbndServer,$UDPPorts)
    $test = New-Object System.Net.Sockets.UdpClient;
    $Object = New-Object PSCustomObject
    $Object | Add-Member -MemberType NoteProperty -Name "ServerName" -Value $env:COMPUTERNAME
    $Object | Add-Member -MemberType NoteProperty -Name "Destination" -Value $InbndServer
    Foreach ($P in $UDPPorts){
        Try {
            $test.Connect($InbndServer, $P);
            $PortCheck = "TRUE";
            $Object | Add-Member Noteproperty "$("Port " + "$p")" -Value "$($PortCheck)"
        }
        Catch {
            $PortCheck = "ERROR";
            $Object | Add-Member Noteproperty "$("Port " + "$p")" -Value "$($PortCheck)"
        }
    }
    $Object
} -ArgumentList $InbndServer,$UDPPorts | select * -ExcludeProperty runspaceid, pscomputername

$UDPResults | Out-GridView -Title "AG and WFC UDP Port Test Results"
$UDPResults | Format-Table * #-AutoSize

This script will test all of the related TCP and UDP ports that are required to ensure your Windows Failover Cluster and SQL Server Availability Group work flawlessly. If you execute the script, you will see results similar to the following.

Data Driven Results
In the preceding image, I have combined each of the GridView output windows into a single screenshot. Highlighted in red is the result set for the TCP tests, and in blue is the window with the test results for the UDP ports. With this script, I can take definitive results, all in one screenshot, and share them with the network admin to try and resolve any port deficiencies. This is just a small data-driven tool that can help ensure quicker resolution when trying to make sure the appropriate ports are open between servers. A quicker resolution in opening the appropriate ports means a quicker resolution to the project, and that much quicker you can move on to other tasks to show more value!

Put a bow on it
This article has demonstrated a meaningful and efficient method (along with the valuable documentation) to test and validate the necessary firewall ports for Availability Groups (AG) and Windows Failover Clustering. With the script provided in this article, you can provide quick and value-added service to your project along with valuable documentation of what is truly needed to ensure proper AG functionality.

Interested in learning about some additional deep technical information? Check out these articles! Here is a blast from the past that is interesting and somewhat related to SQL Server ports. Check it out here.
This is the sixth article in the 2020 "12 Days of Christmas" series. For the full list of articles, please visit this page.

Related Posts:
Here is an Easy Fix for SQL Service Startup Issues… – December 28, 2020
Connect To SQL Server – Back to Basics – March 27, 2019
SQL Server Extended Availability Groups – April 1, 2018
Single User Mode – Back to Basics – May 31, 2018
Lost that SQL Server Access? – May 30, 2018

The post Firewall Ports You Need to Open for Availability Groups appeared first on SQLServerCentral.


Creating an HTML URL from a PowerShell String–#SQLNewBlogger from Blog Posts - SQLServerCentral

Anonymous
30 Dec 2020
2 min read
Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I wrote about getting a quick archive of SQL Saturday data last week, and while doing that, I had some issues building the HTML needed in PowerShell. I decided to work through this a bit and determine what was wrong. My original code looked like this:

$folder = "E:DocumentsgitSQLSatArchiveSQLSatArchiveSQLSatArchiveClientApppublicAssetsPDF"
$code = ""
$list = Get-ChildItem -Path $folder
ForEach ($File in $list) {
    #write-host($File.name)
    $code = $code + "<li><a href=$($File.Name)>$($File.BaseName)</a></li>"
}
write-host($code)

This gave me the code I needed, which I then edited in SSMS to get the proper formatting. However, I knew this needed to work. I had used single quotes and then added in the slashes, but that didn't work. This code:

$folder = "E:DocumentsgitSQLSatArchiveSQLSatArchiveSQLSatArchiveClientApppublicAssetsPDF"
$code = ""
$list = Get-ChildItem -Path $folder
ForEach ($File in $list) {
    #write-host($File.name)
    $code = $code + '<li><a href="/Assets/PDF/$($File.Name)" >$($File.BaseName)</a></li>'
}
write-host($code)

produced this type of output:

<li><a href="/Assets/PDF/$($File.Name)" >$($File.BaseName)</a></li>

Not exactly top notch HTML. I decided that I should look around. I found a post on converting some data to HTML, which wasn't what I wanted, but it had a clue in there: the double quotes. I needed to escape quotes here, as I wanted the double quotes around my string. I changed the line building the string to this:

$code = $code + "<li><a href=""/Assets/PDF/$($File.Name)"" >$($File.BaseName)</a></li>"

And I then had what I wanted:

<li><a href="/Assets/PDF/1019.pdf" >1019</a></li>

Strings in PoSh can be funny, so a little attention to escaping things and knowing about variables and double quotes is helpful.

SQLNewBlogger
This was about 15 minutes of messing with Google and PoSh to solve, but then only about 10 minutes to write up. A good example that shows some research, initiative, and investigation in addition to solving a problem.

The post Creating an HTML URL from a PowerShell String–#SQLNewBlogger appeared first on SQLServerCentral.


Scrivito launches serverless JavaScript CMS

Kunal Chaudhari
17 Apr 2018
2 min read
Scrivito, a SaaS-based content management service, has launched a new breed of cloud-based, serverless JavaScript CMS specifically targeted at medium to large-sized businesses. While the world is shifting to cutting-edge cloud technology, web CMS platforms are still stuck in the past. Thomas Witt, co-founder and CTO of Scrivito, said, "We're at a tipping point. Agencies and dev teams that stick with Wordpress and the like are doomed to be overtaken by the inevitable shift to serverless computing and JavaScript development."

Scrivito checks the boxes for key trending tech innovations in the web development space. Serverless? Yes. Cloud native? Yes. So what's unique about this cutting-edge content management interface, and how exactly does it differentiate itself from other, traditional CMSs?

Scrivito requires zero maintenance thanks to the cloud
This is the most unique feature of Scrivito. Since it is a cloud-based service, it allows developers to spin up a CMS instance without having to re-install anything or reconfigure databases, search engine indexing, backups, or metadata. This leads to no downtime, no software patches, and minimal maintenance effort.

Component reusability powered by ReactJS
Scrivito is powered by Facebook's popular frontend framework, React. Thanks to React's reusable UI components and its flexibility, developers can create complex and interactive functionality such as configurators or multi-page forms with ease. Not only is it built for developers, it also makes it easier for agencies and marketing teams to build, edit, and manage secure, reliable, and cost-effective sites, microsites, and landing pages.

Scrivito is extendable
Scrivito is easily extendable because it doesn't require any infrastructure. Developers and editors can create their own widgets and data structures on the fly. Due to its unique working-copies technology, it brings version control techniques from software development to the CMS world, thus eliminating the need for a staging server and allowing parallel editing of content across teams. Plus, its API-driven approach provides the benefits of a serverless and a headless CMS together with WYSIWYG editing in a single solution.

Scrivito has certainly ignited a revolution in the web development space by introducing serverless technologies to CMS applications. It is available at different price points for personal and enterprise users. To know more about its other features and pricing options, check out the project's official webpage.


workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice

Melisha Dsouza
20 Feb 2019
2 min read
Cloudflare users will very soon be able to deploy Workers without having a Cloudflare domain. They will be able to deploy their Cloudflare Workers to a subdomain of their choice, with an extension of .workers.dev. According to the Cloudflare blog, this is a step towards making it easy for users to get started with Workers and build a new serverless project from scratch.

Cloudflare Workers' serverless execution environment allows users to create new applications or improve existing ones without configuring or maintaining infrastructure. Cloudflare Workers run on Cloudflare servers, not in a user's browser, meaning that a user's code will run in a trusted environment where it cannot be bypassed by malicious clients.

The workers.dev domain was obtained through Google's TLD launch program. Customers can head over to workers.dev, where they will be able to claim a subdomain (one per user). workers.dev is itself fully served using Cloudflare Workers.

Zack Bloom, the Director of Product for Product Strategy at Cloudflare, says that workers.dev will be especially useful for serverless apps. Without cold starts, users get instant scaling to almost any volume of traffic, making this type of serverless seem faster and cheaper.

Cloudflare Workers have received an amazing response from users all over the internet (source: Hacker News), and this news has also been received with much enthusiasm: https://twitter.com/MrAhmadAwais/status/1097919710249783297

You can head over to the Cloudflare blog for more information on this news.

Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers

Cloudflare finally launches Warp and Warp Plus after a delay of more than five months

Vincy Davis
27 Sep 2019
5 min read
More than five months after announcing Warp, Cloudflare finally made it available to the general public yesterday. With two million people on the waitlist to try Warp, the Cloudflare team says that it was harder than they thought to build a next-generation service to secure consumer mobile connections without compromising on speed and power usage. Along with Warp, Cloudflare is also launching Warp Plus.

Warp is a free VPN built into the 1.1.1.1 DNS resolver app that speeds up mobile data by using the Cloudflare network to resolve DNS queries at a faster pace. It also comes with end-to-end encryption and does not require users to install a root certificate to observe encrypted internet traffic. It is built around a UDP-based protocol that is optimized for the mobile internet and offers excellent performance and reliability.

Why did Cloudflare delay the Warp release?
A few days before Cloudflare announced Warp on April 1st, Apple released iOS 12.2 with significant changes in its underlying network stack implementation. This made the Warp network unstable and forced the Cloudflare team to arrange workarounds in their networking code, which took more time. Cloudflare adds, "We had a version of the WARP app that (kind of) worked on April 1. But, when we started to invite people from outside of Cloudflare to use it, we quickly realized that the mobile Internet around the world was far more wild and varied than we'd anticipated."

As the internet is made up of diverse network components, the Cloudflare team found it difficult to account for all the diversity of mobile carriers, mobile operating systems, and mobile device models, as well as users' diverse network settings. Warp uses a technology called Anycast to route user traffic to the Cloudflare network; however, it moves users' data between entire data centers, which made Warp's operation complex.

To overcome all these barriers, the Cloudflare team changed its approach by focusing more on iOS. The team also solidified the shared underpinnings of the app to ensure that it would work even with future network stack upgrades, and tested Warp with network-based users to discover as many corner cases as possible. In the process, the Cloudflare team invented new technologies to keep the session state stable even across multiple mobile networks.

Cloudflare introduces Warp Plus – an unlimited version of Warp
Along with Warp, the Cloudflare team has also launched Warp Plus, an unlimited version of Warp available for a monthly subscription fee. Warp Plus is faster than Warp and uses Cloudflare's Argo Smart Routing to achieve a higher speed. The official blog post states, "Routing your traffic over our network often costs us more than if we release it directly to the internet." To cover these costs, Warp Plus will charge a monthly fee of $4.99/month or less, depending on the user's location. The Cloudflare team also added that they will be launching a test tool within the 1.1.1.1 app in a few weeks to let users "see how your device loads a set of popular sites without WARP, with WARP, and with WARP Plus."

Read also: Cloudflare plans to go public; files S-1 with the SEC

To know more details about Warp Plus, read the technical post by the Cloudflare team.

Privacy features offered by Warp and Warp Plus
The 1.1.1.1 DNS resolver app provides strong privacy protections: debug logs are kept only long enough to ensure the security of the service, and Cloudflare will only retain limited transaction data for legitimate operational and research purposes. Warp will not only maintain the 1.1.1.1 DNS protection layers but will also ensure that:
No user-identifiable log data will be written to disk
Users' browsing data will not be sold for advertising purposes
Warp will not demand any personal information (name, phone number, or email address) to use Warp or Warp Plus
Outside auditors will regularly review Warp's functioning

The Cloudflare team has also notified users that the newly available Warp will still have bugs. The blog post specifies that the most common bug currently in Warp is traffic misrouting, which makes Warp slower than non-Warp mobile internet. (Image source: Cloudflare blog)

The team has made it easy for users to report bugs: just click the little bug icon near the top of the screen in the 1.1.1.1 app, or shake the phone with the app open, to send a bug report to Cloudflare.

Visit the Cloudflare blog for more information on Warp and Warp Plus.

Facebook will no longer involve third-party fact-checkers to review the political content on their platform
GNOME Foundation's Shotwell photo manager faces a patent infringement lawsuit from Rothschild Patent Imaging
A zero-day pre-auth vulnerability is currently being exploited in vBulletin, reports an anonymous researcher


Developers ask for an option to disable Docker Compose from automatically reading the .env file

Bhagyashree R
18 Oct 2019
3 min read
In June this year, Jonathan Chan, a software developer, reported that Docker Compose automatically reads from .env. Since other systems also access the same file for parsing and processing variables, this was creating confusion and breaking compatibility with other .env utilities.

Docker Compose has a "docker-compose.yml" config file used for deploying, combining, and configuring multiple multi-container Docker applications. The .env file is used for putting values into the "docker-compose.yml" file. In the .env file, default environment variables are specified in the form of key-value pairs.

"With the release of 1.24.0, the feature where Compose will no longer accept whitespace in variable names sourced from environment files (this matches the Docker CLI behavior) breaks compatibility with other .env utilities. Although my setup does not use the variables in .env for docker-compose, docker-compose now fails because the .env file does not meet docker-compose's format," Chan explains.

This is not the first time that this issue has been reported. Earlier this year, a user opened an issue on the GitHub repo describing that, after upgrading Compose to 1.24.0-rc1, its automatic parsing of the .env file was failing: "I keep export statements in my .env file so I can easily source it in addition to using it as a standard .env. In previous versions of Compose, this worked fine and didn't give me any issues, however with this new update I instead get an error about spaces inside a value."

As a solution, Chan has proposed, "I propose that you can specify an option to ignore the .env file or specify a different .env file (such as .docker.env) in the docker-compose.yml file so that we can work around projects that are already using the .env file for something else."

This sparked a discussion on Hacker News where users also suggested a few solutions. "This is the exact class of problem that docker itself attempts to avoid. This is why I run docker-compose inside a docker container, so I can control exactly what it has access to and isolate it. There's a guide to do so here. It has the added benefit of not making users install docker-compose itself - the only project requirement remains docker," a user commented. Another user recommended, "You can run docker-compose.yml in any folder in the tree but it only reads the .env from cwd. Just CD into someplace and run docker-compose."

Some users also pointed out the lack of an authentication mechanism in Docker Hub. "Docker Hub still does not have any form of 2FA. Even SMS 2FA would be something great at this point. As an attacker, I would put a great deal of focus on attacking a company's registries on Docker Hub. They can't have 2FA, so the work/reward ratio is quite high," a user commented. Others recommended setting up a time-based one-time password (TOTP) instead.

Check out the reported issue on the GitHub repository.

Amazon EKS Windows Container Support is now generally available
GKE Sandbox: A gVisor based feature to increase security and isolation in containers
6 signs you need containers
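As a rough illustration of the compatibility problem described above, here is a small Python sketch of a strict versus a lenient .env parser. It is not Compose's actual implementation; the file contents, regex, and function name are invented for the example.

# Illustrative only -- not Docker Compose's parser. Shows why a strict reading
# of .env breaks files that other tools (e.g. shell sourcing) accept.

import re

ENV_FILE = """\
export API_KEY=abc123
DEBUG = true
PLAIN_VALUE=ok
"""

STRICT_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def parse_env(text, strict=True):
    result = {}
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue                          # skip blanks and comments
        name, _, value = line.partition("=")
        if strict and not STRICT_NAME.match(name):
            # A strict parser rejects whitespace and shell keywords such as
            # "export" in the variable name -- the breakage described above.
            raise ValueError(f"invalid variable name: {name!r}")
        key = name.strip()
        if key.startswith("export "):         # lenient parsers tolerate sourced-style lines
            key = key[len("export "):]
        result[key] = value.strip()
    return result

print(parse_env(ENV_FILE, strict=False))      # lenient: all three keys parse cleanly
try:
    parse_env(ENV_FILE, strict=True)
except ValueError as err:
    print("strict parser:", err)              # fails on 'export API_KEY'

The proposed workaround in the issue, an option to point Compose at a different file such as .docker.env, would simply keep the strict parser away from files written for the lenient style.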


Your Quick Introduction to Extended Events in Analysis Services from Blog Posts - SQLServerCentral

Anonymous
01 Jan 2021
9 min read
The Extended Events (XEvents) feature in SQL Server is a really powerful tool and it is one of my favorites. The tool is so powerful and flexible, it can even be used in SQL Server Analysis Services (SSAS). Furthermore, it is such a cool tool, there is an entire site dedicated to XEvents. Sadly, despite the flexibility and power that comes with XEvents, there isn't terribly much information about what it can do with SSAS. This article intends to help shed some light on XEvents within SSAS from an internals and introductory point of view – with the hope of getting to more in-depth articles on how to use XEvents with SSAS.

Introducing your Heavy Weight Champion of the SQLverse – XEvents
With all of the power, might, strength, and flexibility of XEvents, it is practically next to nothing in the realm of SSAS. Much of that is due to three factors: 1) lack of a GUI, 2) addiction to Profiler, and 3) inadequate information about XEvents in SSAS. This last reason can be coupled with a sub-reason of "nobody is pushing XEvents in SSAS". For me, these are all just excuses to remain attached to a bad habit.

While it is true that, just like in SQL Server, earlier versions of SSAS did not have a GUI for XEvents, that is no longer the case. As for the inadequate information about the feature, I am hopeful that we can treat that excuse starting with this article. In regards to the Profiler addiction, never fear: there is a GUI, and the Profiler events are accessible via the GUI just the same as the new XEvents events are. How do we know this? Well, the GUI tells us just as much, as shown here.

In the preceding image, I have two sections highlighted in red. The first of note is evidence that this is the GUI for SSAS – note that the connection box states "Group of Olap servers." The second area of note is the highlight demonstrating the two types of categories in XEvents for SSAS. These two categories, as you can see, are "profiler" and "purexevent", not to be confused with "Purex® event". In short, yes Virginia, there is an XEvent GUI, and that GUI incorporates your favorite Profiler events as well.

Let's See the Nuts and Bolts
This article is not about introducing the GUI for XEvents in SSAS – I will get to that in a future article. This article is to introduce you to the stuff behind the scenes. In other words, we want to look at the metadata that helps govern the XEvents feature within the sphere of SSAS. In order to, in my opinion, efficiently explore the underpinnings of XEvents in SSAS, we first need to set up a linked server to make querying the metadata easier.

EXEC master.dbo.sp_addlinkedserver
    @server = N'SSASDIXNEUFLATIN1' --whatever LinkedServer name you desire
    , @srvproduct=N'MSOLAP'
    , @provider=N'MSOLAP'
    , @datasrc=N'SSASServerSSASInstance' --change your data source to an appropriate SSAS instance
    , @catalog=N'DemoDays' --change your default database
GO

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname=N'SSASDIXNEUFLATIN1'
    , @useself=N'False'
    , @locallogin=NULL
    , @rmtuser=NULL
    , @rmtpassword=NULL
GO

Once the linked server is created, you are primed and ready to start exploring SSAS and the XEvent feature metadata. The first thing to do is familiarize yourself with the system views that drive XEvents. You can do this with the following query.
SELECT lq.*
FROM OPENQUERY(SSASDIXNEUFLATIN1, 'SELECT * FROM $system.dbschema_tables') as lq
WHERE CONVERT(VARCHAR(100),lq.TABLE_NAME) LIKE '%XEVENT%'
   OR CONVERT(VARCHAR(100),lq.TABLE_NAME) LIKE '%TRACE%'
ORDER BY CONVERT(VARCHAR(100),lq.TABLE_NAME);

When the preceding query is executed, you will see results similar to the following. In this image you will note that I have two sections highlighted. The first section, in red, is the group of views related to the trace/Profiler functionality. The second section, in blue, is the group of views related to the XEvents feature in SSAS. Unfortunately, this does demonstrate that XEvents in SSAS is a bit less mature than one may expect, and definitely shows that it is less mature in SSAS than in the SQL engine. That shortcoming aside, we will use these views to explore further into the world of XEvents in SSAS.

Exploring Further
Knowing what the group of tables looks like, we have a fair idea of where we need to look next in order to become more familiar with XEvents in SSAS. The tables I would primarily focus on (at least for this article) are: DISCOVER_TRACE_EVENT_CATEGORIES, DISCOVER_XEVENT_OBJECTS, and DISCOVER_XEVENT_PACKAGES. Granted, I will only be using the DISCOVER_XEVENT_PACKAGES view very minimally.

From here is where things get a little more tricky. I will take advantage of temp tables and some more OPENQUERY trickery to dump the data in order to be able to relate it and use it in an easily consumable format. Before getting into the queries I will use, first a description of the objects I am using.

DISCOVER_TRACE_EVENT_CATEGORIES is stored in XML format and is basically a definition document of the Profiler-style events. In order to consume it, the XML needs to be parsed and formatted in a better format.
DISCOVER_XEVENT_PACKAGES is the object that lets us know what area of SSAS an event is related to and is a very basic attempt at grouping some of the events into common domains.
DISCOVER_XEVENT_OBJECTS is where the majority of the action resides for Extended Events. This object defines the different object types (actions, targets, maps, messages, and events – more on that in a separate article).

Script Fun
Now for the fun in the article!
IF OBJECT_ID('tempdb..#SSASXE') IS NOT NULL
BEGIN
    DROP TABLE #SSASXE;
END;
IF OBJECT_ID('tempdb..#SSASTrace') IS NOT NULL
BEGIN
    DROP TABLE #SSASTrace;
END;

SELECT CONVERT(VARCHAR(MAX), xo.Name) AS EventName
    , xo.description AS EventDescription
    , CASE
        WHEN xp.description LIKE 'SQL%' THEN 'SSAS XEvent'
        WHEN xp.description LIKE 'Ext%' THEN 'DLL XEvents'
        ELSE xp.name
      END AS PackageName
    , xp.description AS CategoryDescription --very generic due to it being the package description
    , NULL AS CategoryType
    , 'XE Category Unknown' AS EventCategory
    , 'PureXEvent' AS EventSource
    , ROW_NUMBER() OVER (ORDER BY CONVERT(VARCHAR(MAX), xo.name)) + 126 AS EventID
INTO #SSASXE
FROM (
    SELECT * FROM OPENQUERY(SSASDIXNEUFLATIN1, 'select * From $system.Discover_Xevent_Objects')
) xo
INNER JOIN (
    SELECT * FROM OPENQUERY(SSASDIXNEUFLATIN1, 'select * FROM $system.DISCOVER_XEVENT_PACKAGES')
) xp
    ON xo.package_id = xp.id
WHERE CONVERT(VARCHAR(MAX), xo.object_type) = 'event'
    AND xp.ID <> 'AE103B7F-8DA0-4C3B-AC64-589E79D4DD0A'
ORDER BY CONVERT(VARCHAR(MAX), xo.[name]);

SELECT ec.x.value('(./NAME)[1]', 'VARCHAR(MAX)') AS EventCategory
    , ec.x.value('(./DESCRIPTION)[1]', 'VARCHAR(MAX)') AS CategoryDescription
    , REPLACE(d.x.value('(./NAME)[1]', 'VARCHAR(MAX)'), ' ', '') AS EventName
    , d.x.value('(./ID)[1]', 'INT') AS EventID
    , d.x.value('(./DESCRIPTION)[1]', 'VARCHAR(MAX)') AS EventDescription
    , CASE ec.x.value('(./TYPE)[1]', 'INT')
        WHEN 0 THEN 'Normal'
        WHEN 1 THEN 'Connection'
        WHEN 2 THEN 'Error'
      END AS CategoryType
    , 'Profiler' AS EventSource
INTO #SSASTrace
FROM (
    SELECT CONVERT(XML, lq.[Data])
    FROM OPENQUERY(SSASDIXNEUFLATIN1, 'Select * from $system.Discover_trace_event_categories') lq
) AS evts(event_data)
CROSS APPLY event_data.nodes('/EVENTCATEGORY/EVENTLIST/EVENT') AS d(x)
CROSS APPLY event_data.nodes('/EVENTCATEGORY') AS ec(x)
ORDER BY EventID;

SELECT ISNULL(trace.EventCategory, xe.EventCategory) AS EventCategory
    , ISNULL(trace.CategoryDescription, xe.CategoryDescription) AS CategoryDescription
    , ISNULL(trace.EventName, xe.EventName) AS EventName
    , ISNULL(trace.EventID, xe.EventID) AS EventID
    , ISNULL(trace.EventDescription, xe.EventDescription) AS EventDescription
    , ISNULL(trace.CategoryType, xe.CategoryType) AS CategoryType
    , ISNULL(CONVERT(VARCHAR(20), trace.EventSource), xe.EventSource) AS EventSource
    , xe.PackageName
FROM #SSASTrace trace
FULL OUTER JOIN #SSASXE xe
    ON trace.EventName = xe.EventName
ORDER BY EventName;

Given the level of maturity of XEvents in SSAS, there is some massaging of the data that has to be done so that we can correlate the trace events to the XEvents events – little things like missing EventIDs in the XEvents events or missing categories, and so forth. That's fine; we are able to work around it and produce results similar to the following. If you compare it to the GUI, you will see that it is somewhat similar and should help bridge the gap between the metadata and the GUI for you.

Put a bow on it
Extended Events is a powerful tool for many facets of SQL Server. While it may still be rather immature in the world of SSAS, it still has a great deal of benefit and power to offer. Getting to know XEvents in SSAS can be a crucial skill in improving your Data Superpowers, and it is well worth the time spent trying to learn such a cool feature.

Interested in learning more about the depth and breadth of Extended Events? Check these out or check out the XE website here. Want to learn more about your indexes? Try this index maintenance article or this index size article.
This is the seventh article in the 2020 "12 Days of Christmas" series. For the full list of articles, please visit this page.

Related Posts:
Extended Events Gets a New Home – May 18, 2020
Profiler for Extended Events: Quick Settings – March 5, 2018
How To: XEvents as Profiler – December 25, 2018
Easy Open Event Log Files – June 7, 2019
Azure Data Studio and XEvents – November 21, 2018

The post Your Quick Introduction to Extended Events in Analysis Services appeared first on SQLServerCentral.

AWS announces more flexibility in its Certification Exams, drops its exam prerequisites

Melisha Dsouza
18 Oct 2018
2 min read
Last week (on 11th October), the AWS team announced that they are removing the exam prerequisites to give users more flexibility in the AWS Certification Program. Previously, it was a prerequisite for a customer to pass the Foundational or Associate level exam before appearing for a Professional or Specialty certification. AWS has now eliminated this prerequisite, taking into account customers' requests for flexibility. Customers are no longer required to have an Associate certification before pursuing a Professional certification, nor do they need to hold a Foundational or Associate certification before pursuing a Specialty certification.

The Professional level exams are pretty tough to pass. Until a customer has deep knowledge of the AWS platform, passing the Professional exam is difficult. If a customer skips the Foundational or Associate level exams and directly appears for the Professional level exams, he or she will not have the practice and knowledge necessary to fare well in them. Instead, if he or she fails the exam, backing up to the Associate level can be demotivating.

AWS Certification helps individuals demonstrate the expertise to design, deploy, and operate highly available, cost-effective, and secure applications on AWS, and the proficiency they gain with AWS can earn them tangible benefits. The exams also help employers identify skilled professionals who can use AWS technologies to lead IT initiatives, and help them reduce the risks and costs of implementing their workloads and projects on the AWS platform.

AWS dominates the cloud computing market, and the AWS Certified Solutions Architect exams can help candidates secure their career in this exciting field. AWS offers digital and classroom training to build cloud skills and prepare for certification exams. To know more about this announcement, head over to the official AWS blog.

'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
AWS machine learning: Learning AWS CLI to execute a simple Amazon ML workflow [Tutorial]


LLVM 8.0.0 releases!

Natasha Mathur
22 Mar 2019
3 min read
The LLVM team released LLVM 8.0 earlier this week. LLVM is a collection of tools that help develop compiler front ends and back ends. LLVM is written in C++ and has been designed for compile-time, link-time, run-time, and "idle-time" optimization of programs that are written in arbitrary programming languages. The LLVM 8.0 release notes cover known issues, major improvements, and other changes in the subprojects of LLVM.

There are certain issues in LLVM 8.0.0 that could not be fixed before this release. For instance, clang is getting miscompiled by trunk GCC, and "asan-dynamic" does not work on FreeBSD. Beyond the known issues, there is a long list of changes that have been made in LLVM 8.0.0.

Non-comprehensive changes in LLVM 8.0.0
The llvm-cov tool can now export lcov trace files with the help of the -format=lcov option of the export command.
The add_llvm_loadable_module CMake macro has been deprecated; the add_llvm_library macro with the MODULE argument now provides the same functionality.
For MinGW, references to data variables that are to be imported from a DLL can now be accessed via a stub, which allows the linker to convert them to a dllimport if needed.
Support has been added for labels as offsets in the .reloc directive.
Windows support for libFuzzer (x86_64) has also been added.

Other changes
LLVM IR: The function attribute named speculative_load_hardening has been introduced. It indicates that Speculative Load Hardening should be enabled for the function body.
JIT APIs: The ORC (On Request Compilation) JIT APIs now support concurrent compilation. The existing (non-concurrent) ORC layer classes, as well as the related APIs, have been deprecated and renamed with a "Legacy" prefix (e.g. LegacyIRCompileLayer). All the deprecated classes will be removed in LLVM 9.
AArch64 target: Support has been added for Speculative Load Hardening. Initial support has also been added for the Tiny code model, where code and statically defined symbols must remain within 1MB.
MIPS target: Support for the GlobalISel instruction selection framework has been improved. ORC JIT now offers support for the MIPS and MIPS64 architectures, and there is newly added support for the MIPS N32 ABI.
PowerPC target: This target has been switched to a non-PIC default in LLVM 8.0.0. Darwin support has been deprecated, and out-of-order scheduling has been enabled for P9.
SystemZ target: Changes include various code-gen improvements related to improved auto-vectorization, inlining, and instruction scheduling.
Other than these, changes have also been made to the X86 target, WebAssembly target, Nios2 target, and LLDB.

For a complete list of changes, check out the official LLVM 8.0.0 release notes.

LLVM 7.0.0 released with improved optimization and new tools for monitoring
LLVM will be relicensing under Apache 2.0 start of next year
LLVM officially migrating to GitHub from Apache SVN