
Tech News

3711 Articles
Introducing Luna, world’s first programming language with dual syntax representation, data flow modeling and much more!

Amrata Joshi
17 Jun 2019
3 min read
Luna, a data processing and visualization environment, provides a library of highly tailored, domain-specific components as well as a framework for building new components. Luna focuses on domains related to data processing, such as IoT, bioinformatics, data science, graphic design, and architecture.

What's so interesting about Luna?

Data flow modeling
Luna is a data flow modeling whiteboard that lets users draw components and the way data flows between them. Components in Luna are simply nested data flow graphs, and users can enter any component or subcomponent to move from high to low levels of abstraction. Luna is also designed as a general-purpose programming language with two equivalent representations, visual and textual.

Data processing and visualization
Luna components can visualize their results and use colors to indicate the type of data they exchange. Users can compare all the intermediate outcomes and understand the flow of data by looking at the graph. They can also adjust parameters and observe how each affects every step of the computation in real time.

Debugging
Luna can assist in analyzing network service outages and data corruption. When an error occurs, Luna tracks and displays its path through the graph so that users can easily follow it and understand where it comes from. It also records and visualizes information about performance and memory consumption.

Luna Explorer, the search engine
Luna comes with Explorer, a context-aware fuzzy search engine that lets users query libraries for desired components and browse their documentation. Because Explorer is context-aware, it can understand the flow of data, predict users' intentions, and adjust search results accordingly.

Dual syntax representation
Luna is also the world's first programming language to feature two equivalent syntax representations, visual and textual.
Automatic parallelism
Luna also features automatic parallelism built on Haskell's state-of-the-art GHC runtime system, which can run thousands of threads in a fraction of a second. Luna automatically partitions a program and schedules its execution over the available CPU cores.

Users seem happy with Luna. One user commented on Hacker News, "Luna looks great. I've been doing work in this area myself and hope to launch my own visual programming environment next month or so." Others are pleased that Luna pairs a text syntax with composable functional blocks. Another user commented, "I like that Luna has a text syntax. I also like that Luna supports building graph functional blocks that can be nested inside other graphs. That's a missing link in other tools of this type that limits the scale of what you can do with them."

To know more, check out the official Luna website.

Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
Polyglot programming allows developers to choose the right language to solve tough engineering problems
Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study
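Luna's own syntax is not reproduced here, but the data flow idea, components as nodes whose outputs feed downstream nodes, can be sketched in plain JavaScript. This is an illustrative sketch only; the `Node` class and graph below are not Luna's actual API or syntax.

```javascript
// Minimal data-flow sketch: each node is a function plus named inputs.
// Requesting a node's value pulls values through its upstream nodes --
// the same idea Luna visualizes as boxes and wires on its whiteboard.
class Node {
  constructor(fn, ...inputs) {
    this.fn = fn;
    this.inputs = inputs; // upstream Node instances or plain constants
  }
  value() {
    const args = this.inputs.map(i => (i instanceof Node ? i.value() : i));
    return this.fn(...args);
  }
}

// Build a tiny graph computing (a + b) * scale.
const a = new Node(() => 3);
const b = new Node(() => 4);
const sum = new Node((x, y) => x + y, a, b);
const scaled = new Node((s, k) => s * k, sum, 10);

console.log(scaled.value()); // 70
```

Because each node is itself just a function over inputs, whole subgraphs can be wrapped as nodes, which mirrors the nesting of components Luna describes.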

The Tor Project on browser fingerprinting and how it is taking a stand against it

Bhagyashree R
06 Sep 2019
4 min read
In a blog post shared on Wednesday, Pierre Laperdrix, a postdoctoral researcher in the Secure Web Applications Group at CISPA, talked about browser fingerprinting, its risks, and the efforts the Tor Project has taken to prevent it. He also talked about his Fingerprint Central website, which has officially been a part of the Tor Project since 2017.

What is browser fingerprinting?
Browser fingerprinting is the systematic collection of information about a remote computing device for the purpose of identification. There are several techniques through which a third party can build a "rich fingerprint": the availability of JavaScript or other client-side scripting languages, the user-agent and accept headers, the HTML5 Canvas element, and more. A browser fingerprint may include information such as browser and operating system type and version, active plugins, timezone, language, screen resolution, and various other active settings. Some users may think these attributes are too generic to identify a particular person. However, according to Panopticlick, a browser fingerprinting test website, only 1 in 286,777 other browsers will share a given browser's fingerprint.

Laperdrix shared an example of a fingerprint in his post (source: The Tor Project).

As with any technology, browser fingerprinting can be used or misused. Fingerprints can enable a remote application to prevent potential fraud or online identity theft. On the other hand, they can also be used to track users across websites and collect information about their online behavior without their consent. Advertisers and marketers can use this data for targeted advertising.

Read also: All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night

Steps taken by the Tor Project to prevent browser fingerprinting
Laperdrix said that Tor was the very first browser to understand and address the privacy threats that browser fingerprinting poses.
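To make the threat concrete, here is a sketch of the kind of attribute collection a fingerprinting script performs. The attributes are passed in as a plain object so the function stays testable outside a browser; in a real page they would come from `navigator` and `screen`, and the attribute list is illustrative rather than exhaustive.

```javascript
// Combine a handful of browser attributes into a single identifying string.
// Real fingerprinting scripts gather far more (canvas hashes, fonts, etc.).
function buildFingerprint(env) {
  return [
    env.userAgent,
    env.language,
    `${env.screenWidth}x${env.screenHeight}`,
    env.timezone,
    (env.plugins || []).join(','),
  ].join('|');
}

const fp = buildFingerprint({
  userAgent: 'Mozilla/5.0 (X11; Linux x86_64) Firefox/69.0',
  language: 'en-US',
  screenWidth: 1920,
  screenHeight: 1080,
  timezone: 'Europe/Paris',
  plugins: ['PDF Viewer'],
});
console.log(fp);

// Panopticlick's "1 in 286,777" figure corresponds to roughly
// log2(286777) = ~18 bits of identifying information.
console.log(Math.log2(286777).toFixed(1)); // "18.1"
```

Each additional distinctive attribute multiplies the number of users a browser can be distinguished from, which is why Tor's strategy, described next in the article, is to make every attribute identical across users.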
The Tor browser, which goes by the tagline "anonymity online", is designed to reduce online tracking and identification of users. Its approach to preventing identification is very simple. "In the end, the approach chosen by Tor developers is simple: all Tor users should have the exact same fingerprint. No matter what device or operating system you are using, your browser fingerprint should be the same as any device running Tor Browser," Laperdrix wrote.

Many other changes have been made to the Tor browser over the years to prevent the unique identification of users. Tor warns users when they maximize their browser window, since window size is another attribute that can be used to identify them. It has introduced default fallback fonts to prevent font and canvas fingerprinting. It sets all JS clock sources and event timestamps to a specific resolution to prevent JavaScript from measuring the time intervals of things like typing to produce a fingerprint.

Talking about his contribution towards preventing browser fingerprinting, Laperdrix wrote, "As part of the effort to reduce fingerprinting, I also developed a fingerprinting website called FP Central to help Tor developers find fingerprint regressions between different Tor builds." As part of Google Summer of Code 2016, Laperdrix proposed developing a website called Fingerprint Central, which is now officially included in the Tor Project. Similar to AmIUnique.org or Panopticlick, FP Central was developed to study the diversity of browser fingerprints. It runs a fingerprinting test suite and collects data from Tor browsers to help developers design and test new fingerprinting protections. They can also use it to ensure that fingerprinting-related bugs are correctly fixed with specific regression tests.
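One of the defenses mentioned above, clamping JS timestamps to a fixed resolution, can be illustrated with a short sketch: every high-resolution timestamp is rounded down to a coarse bucket so that fine-grained timing differences disappear. The 100 ms resolution below is illustrative, not the exact value Tor Browser uses.

```javascript
// Clamp a high-resolution timestamp (in ms) to a coarse bucket.
// With a 100 ms resolution, 1234.567 and 1299.9 become indistinguishable,
// which frustrates timing-based fingerprinting of things like keystrokes.
function clampTimestamp(ms, resolutionMs = 100) {
  return Math.floor(ms / resolutionMs) * resolutionMs;
}

console.log(clampTimestamp(1234.567)); // 1200
console.log(clampTimestamp(1299.9));   // 1200
```

A script measuring the gap between two key presses would then see only multiples of the resolution, destroying most of the signal a typing-rhythm fingerprint relies on.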
Explaining the long-term goal of the website, he said, "The expected long-term impact of this project is to reduce the differences between Tor users and reinforce their privacy and anonymity online."

A whole lot of modifications made under the hood to prevent browser fingerprinting can be found using the "tbb-fingerprinting" tag in the bug tracker. These modifications will also make their way into future releases of Firefox under the Tor Uplift program.

Many organizations have taken a stand against browser fingerprinting, including the browser makers Mozilla and Brave. Earlier this week, Firefox 69 shipped with browser fingerprinting blocked by default. Brave also comes with a Fingerprinting Protection Mode enabled by default. In 2018, Apple updated Safari to share only a simplified system profile, making it difficult to uniquely identify or track users.

Read also: Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Check out Laperdrix's post on the Tor blog to learn more about browser fingerprinting.

Other news in Web
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
Google Chrome 76 now supports native lazy-loading
#Reactgate forces React leaders to confront the community's toxic culture head on

low.js, a Node.js port for embedded systems

Prasad Ramesh
17 Sep 2018
3 min read
Node.js is a popular backend for web development despite some of its flaws. For embedded systems, there is now low.js, a Node.js port with far lower system requirements. With low.js you can write JavaScript applications using the full Node.js API and run them on regular computers as well as on embedded devices based on the $3 ESP32 microcontroller.

The V8 JavaScript engine at the center of Node.js is replaced with Duktape, an embeddable ECMAScript E5/E5.1 engine with a compact footprint. Some parts of the Node.js system library have been rewritten for a more compact footprint and to use more native code. low.js currently uses under 2 MB of disk space, with a minimum requirement of around 1.5 MB of RAM for the ESP32 version.

low.js features
low.js is good for hobbyists and people interested in electronics. It allows running Node.js scripts on smaller devices such as routers based on Linux or uClinux without using many resources, which is great for scripting, especially when the devices communicate over the internet.

The neonious one is a microcontroller board based on low.js for ESP32, which can be programmed in JavaScript ES6 with the Node API. It includes Wifi, Ethernet, additional flash, and an extra I/O controller. The low system requirements of low.js allow it to run comfortably on the ESP32-WROVER module. The ESP32-WROVER costs under $3 for large orders, making it a very cost-effective solution for IoT devices requiring a microcontroller and Wifi. low.js for ESP32 also adds the benefit of fast software development and maintenance, since specialized microcontroller software developers are not needed.

How to install?
The community edition of low.js runs on POSIX-based systems including Linux, uClinux, and Mac OS X. It is available on GitHub, and currently no ./configure script is present, so you might need some programming skills and knowledge to get low.js up and running on your system.
The commands are as follows:

git clone https://github.com/neonious/lowjs
cd lowjs
git submodule update --init --recursive
make

low.js for ESP32 is the same as the community edition, but adapted for the ESP32 microcontroller. This version is not open source and comes pre-flashed on the neonious one. For more information and documentation, visit the low.js website.

Deno, an attempt to fix Node.js flaws, is rewritten in Rust
Node.js announces security updates for all their active release lines for August 2018
Deploying Node.js apps on Google App Engine is now easy

Introducing Watermelon DB: A new relational database to make your React and React Native apps highly scalable

Bhagyashree R
11 Sep 2018
2 min read
Now you can store your data in Watermelon! Yesterday, Nozbe released Watermelon DB v0.6.1-1, a new addition to the database world. It aims to help you build powerful React and React Native apps that scale to a large number of records and remain fast.

The Watermelon architecture is database-agnostic, making it cross-platform. It is a high-level layer for dealing with data that can be plugged into any underlying database, depending on platform needs.

Why choose Watermelon DB?
Watermelon DB is optimized for building complex React and React Native applications. The following factors help ensure high application speed:

It makes your application highly scalable by using lazy loading, which means Watermelon DB loads data only when it is requested.
Most queries resolve in less than 1 ms, even with 10,000 records, as all querying is done on an SQLite database on a separate thread.
You can launch your app instantly irrespective of how much data you have.
It is supported on iOS, Android, and the web.
It is statically typed, keeping Flow, a static type checker for JavaScript, in mind.
It is fast, asynchronous, multi-threaded, and highly cached.
It is designed to be used with a synchronization engine to keep the local database up to date with a remote database.

Currently, Watermelon DB is in active development and cannot be used in production. The roadmap states that migrations will soon be added to allow production use of Watermelon DB. Schema migration is the mechanism by which you can add new tables and columns to the database in a backward-compatible way.

To learn how to install it and to try a few examples, check out Watermelon DB on GitHub.

React Native 0.57 coming soon with new iOS WebViews
What's in the upcoming SQLite 3.25.0 release: window functions, better query optimizer and more
React 16.5.0 is now out with a new package for scheduling, support for DevTools, and more!
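The lazy-loading idea, recording what a query should do but touching no data until results are actually requested, can be sketched generically in JavaScript. This is a concept sketch only, not Watermelon DB's real API.

```javascript
// Concept sketch of lazy loading: where() only records filters;
// no data is scanned until fetch() is called -- similar in spirit to
// how Watermelon DB defers loading until data is requested.
class LazyQuery {
  constructor(records) {
    this.records = records;
    this.filters = [];
  }
  where(predicate) {
    this.filters.push(predicate); // just recorded, not executed
    return this; // chainable
  }
  fetch() {
    // Only now is the data actually scanned.
    return this.filters.reduce((rows, f) => rows.filter(f), this.records);
  }
}

const posts = new LazyQuery([
  { id: 1, published: true },
  { id: 2, published: false },
  { id: 3, published: true },
]);

const q = posts.where(p => p.published); // cheap: nothing scanned yet
console.log(q.fetch().length); // 2
```

Building the query is cheap no matter how large the dataset is; the cost is paid only when (and if) the results are consumed, which is what lets an app with many records launch instantly.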

Rust’s original creator, Graydon Hoare on the current state of system programming and safety

Bhagyashree R
20 Jun 2019
4 min read
Back in July 2010, Graydon Hoare showcased the Rust programming language for the very first time at the Mozilla Annual Summit. Rust is an open-source systems programming language created with speed, memory safety, and parallelism in mind. Thanks to Rust's memory and thread safety guarantees, a supportive community, and a quickly evolving toolchain, many major projects are being rewritten in it. One of the major ones is Servo, an HTML rendering engine intended to eventually replace Firefox's rendering engine; Mozilla is also using Rust to rewrite many other key parts of Firefox under Project Quantum. Fastly chose Rust to implement Lucet, its native WebAssembly compiler and runtime. More recently, Facebook also chose Rust to implement its controversial Libra blockchain.

As the 9th anniversary of the day Hoare first presented Rust to a large audience approaches, The New Stack conducted a very interesting interview with him. In it, he talked about the current state of systems programming, how safe he considers our current complex systems to be, how they can be made safer, and more. Here are the key highlights from the interview:

Hoare on a brief history of Rust
Hoare started working on Rust as a side project in 2006. Mozilla, his employer at the time, got interested in the project and provided him a team of engineers to help with further development of the language. In 2013, he experienced burnout and decided to step down as technical lead. After working on some less time-sensitive projects, he quit Mozilla and worked for the payment network Stellar. In 2016, he got a call from Apple to work on the Swift programming language. Rust is now developed by its core teams and an active community of volunteer coders.
The language he once described as a "spare-time kinda thing" is now used by many developers to create a wide range of software, from operating systems to simulation engines for virtual reality. It was also "the most loved programming language" in the Stack Overflow Developer Survey for four years in a row (2016-2019).

Hoare was very humble about the hard work and dedication he has put into creating Rust. When asked to summarize Rust's history, he simply said that "we got lucky". He added, "that Mozilla was willing to fund such a project for so long; that Apple, Google, and others had funded so much work on LLVM beforehand that we could leverage; that so many talented people in academia, industry and just milling about on the internet were willing to volunteer to help out."

The current state of systems programming and safety
Hoare considers the state of systems programming "healthy" compared to the first couple of decades of his career. It is now far easier to sell a language focused on performance and correctness, and we are seeing more good languages come to market because of the increasing interaction between academia and industry.

When asked about safety, Hoare believes that although we are slowly taking steps towards better safety, the overall situation is not improving. He attributes this to the number of new, complex computing systems being built. He said, "complexity beyond comprehension means we often can't even define safety, much less build mechanisms that enforce it." Another reason, according to him, is the huge amount of vulnerable software already in the field that can be exploited at any time by a bad actor. For instance, on Tuesday, a zero-day vulnerability was fixed in Firefox that was being "exploited in the wild" by attackers.
"Like much of the legacy of the 20th century, there's just a tremendous mess in software that's going to take generations to clean up, assuming humanity even survives that long," he adds.

How systems programming can be made safer
Hoare designed Rust with safety in mind: its rich type system and ownership model ensure memory and thread safety. However, he suggests that we can do a lot better when it comes to safety in systems programming. He listed a number of improvements we could implement: "information flow control systems, effect systems, refinement types, liquid types, transaction systems, consistency systems, session types, unit checking, verified compilers and linkers, dependent types." Hoare believes academia has already proposed many such features; the main challenge is to implement them "in a balanced, niche-adapted language that's palatable enough to industrial programmers to be adopted and used."

You can read Hoare's full interview on The New Stack.

Rust 1.35.0 released
Rust shares roadmap for 2019
Rust 1.34 releases with alternative cargo registries, stabilized TryFrom and TryInto, and more

Exploring the new .NET Multi-Platform App UI (MAUI) with the Experts

Expert Network
25 May 2021
8 min read
During the 2020 edition of Build, Microsoft revealed its plan for a multi-platform framework called .NET MAUI. The framework is an upgraded and transformed version of Xamarin.Forms, enabling developers to build robust device applications with native features for Windows, Android, macOS, and iOS.

Microsoft has recently devoted efforts to unifying the .NET platform, in which MAUI plays a vital role. The framework helps developers access the native API (Application Programming Interface) of all modern operating systems by offering a single codebase with built-in resources. It paves the way for multi-platform applications under one exclusive project structure, with the flexibility of incorporating different source files or resources for different platforms when needed.

.NET MAUI will bring the project structure down to a single source with single-click deployment for as many platforms as needed. Prominent features in .NET MAUI will include XAML and Model-View-ViewModel (MVVM), and it will enable developers to implement the Model-View-Update (MVU) pattern. Microsoft also intends to offer 'Try-N-Convert' support and migration guides to help developers make a seamless transition of existing apps to .NET MAUI. Performance remains the focal point in MAUI, with faster algorithms, advanced compilers, and an advanced SDK-style project tooling experience.

Let us hear what our experts have to say about MAUI, a framework that holds the potential to streamline cross-platform app development.

Which technology, native or cross-platform app development, is better and more prevalent?

Gabriel: I always suggest that the best platform is the one that fits best with your team. I mean, if you have a C# team, for sure .NET development (Xamarin, MAUI, and so on) will be better.
On the other hand, if you have a JavaScript/TypeScript team, we do have several other options for native/cross-platform development.

Francesco: In general, saying "better" is quite difficult. The right choice always depends on the constraints one has, but I think that for most applications "cross-platform" is the only acceptable choice. Mobile and desktop applications have noticeably short lifecycles, and most of them have lower budgets than server enterprise applications. Often, they are just one of several ways to interact with an enterprise application, or with complex websites. Therefore, both budget and time constraints make developing and maintaining several native applications unrealistic. However, no matter how smart and optimized cross-platform frameworks are, native applications always have better performance and take full advantage of the specific features of each device. So, for sure, there are critical applications that can only be implemented as natives.

Valerio: Both approaches have pros and cons: native mobile apps usually have higher performance and a seamless user experience, thus being ideal for end users and/or product owners with lofty expectations in terms of UI/UX. However, building them nowadays can be costly and time-consuming because you need a strong dev team (or multiple teams) that can handle iOS, Android, and Windows/Linux desktop PCs. Furthermore, there is the possibility of having different codebases, which can be quite cumbersome to maintain, upgrade, and keep in sync. Cross-platform development can mitigate these downsides. However, everything you save in terms of development cost, time, and maintainability will often be paid for in terms of performance, limited functionality, and limited UI/UX; not to mention the steep learning curve that multi-platform development frameworks tend to have due to their elevated level of abstraction.
What are the prime differences between MAUI and the Uno Platform, if any?

Gabriel: I would say that, since MAUI builds on Xamarin.Forms, it will easily enable compatibility with different operating systems.

Francesco: Uno's default option is to style an application the same on all platforms, but it gives you the opportunity to make the application look and feel like a native app, whereas MAUI takes more advantage of native features. In a few words, MAUI applications look more like native applications. Uno also targets WASM in browsers, while MAUI does not; instead it somehow proposes Blazor. Maybe Blazor will become another choice to unify mobile, desktop, and web development, but not in the .NET 6.0 release.

Valerio: Both MAUI and the Uno Platform try to achieve a similar goal, but they are based on two different architectural approaches: MAUI, like Xamarin.Forms, has its own abstraction layer above the native APIs, while Uno builds UWP interfaces upon them. Again, both approaches have their pros and cons: abstraction layers can be costly in terms of performance (especially on mobile devices, since the layer needs to take care of most layout-related tasks), but they help keep a small and versatile codebase.

Would MAUI be able to fulfill cross-platform app development requirements right from its launch, or will it take a few developments post-release for it to entirely meet its purpose?

Gabriel: The mechanism presented in this kind of technology will let us guarantee cross-platform behavior even in cases where there are differences. So, my answer would be yes.

Francesco: Looking at the history of all Microsoft platforms, I would say it is very unlikely that MAUI will fulfill all cross-platform app development requirements right from launch. It might be 80-90 percent effective and cater to most development needs.
For MAUI to become a full-fledged platform equipped with all the tools for cross-platform apps, it might take another year.

Valerio: I hope so! Realistically speaking, I think this will be a tough task: I would not expect good cross-platform app compatibility right from the start, especially in terms of UI/UX. Such ambitious projects are gradually perfected through accurate and relevant feedback from real users and the community.

How much time will it take for Microsoft to release MAUI?

Gabriel: Microsoft is continuously delivering versions of its software environments. The question is a little more complex, because as a software developer you cannot only think about when Microsoft will release MAUI; you need to consider when it will be stable and have an LTS version available. I believe this will take a little longer than the roadmap presented by Microsoft.

Francesco: According to the planned timeline, MAUI should be launched in conjunction with the November 2021 .NET 6 release. This timeline should be respected, but in the worst-case scenario the release will be delayed and arrive a few months later. This is similar to what happened with Blazor and the .NET 3.1 release.

Valerio: The official MAUI timeline sounds rather optimistic, but Microsoft seems to be investing a lot in the project, and they have already managed to successfully deliver big releases without excessive delays (think of .NET 5). I think they will try their best to launch MAUI together with the first .NET 6 final release, since it would be ideal in terms of marketing and could help bring in some additional early adopters.

Summary

The launch of the Multi-Platform App UI (MAUI) will undoubtedly change the way developers build device applications. Developers can look forward to smooth and faster deployment; whether MAUI will offer platform-specific projects or a shared code system will eventually be revealed.
It is too soon to estimate the extent of MAUI's impact, but it will surely be worth the wait. Now that MAUI has moved into the dotnet GitHub organization, there is excitement to see how it unfolds across the development platforms and how the communities receive and align with it. With every upcoming preview of .NET 6 we can expect numerous additions to the capabilities of .NET MAUI. For now, developers are looking forward to the "dotnet new" experience.

About the authors

Gabriel Baptista is a software architect who leads technical teams across a diverse range of projects for retail and industry, using a significant array of Microsoft products. He is a specialist in Azure Platform-as-a-Service (PaaS) and a computing professor who has published many papers and teaches various subjects related to software engineering, development, and architecture. He is also a speaker on Channel 9, one of the most prestigious and active community websites for the .NET stack.

Francesco Abbruzzese built the MVC Controls Toolkit. He has contributed to the diffusion and evangelization of the Microsoft web stack since the first version of ASP.NET MVC through tutorials, articles, and tools. He writes about .NET and client-side technologies on his blog, Dot Net Programming, and in various online magazines. His company, Mvcct Team, implements and offers web applications, AI software, SAS products, tools, and services for web technologies associated with the Microsoft stack.

Gabriel and Francesco are the authors of the book Software Architecture with C# 9 and .NET 5, 2nd Edition.

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups.
He is an active member of the Stack Exchange Network, providing advice and tips in the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He is the founder and owner of Ryadel, and the author of ASP.NET Core 5 and Angular, 4th Edition.
Meet yuzu – an experimental emulator for the Nintendo Switch

Sugandha Lahoti
17 Jul 2018
3 min read
The makers of Citra, an emulator for the Nintendo 3DS, have released a new emulator called yuzu. This emulator targets the Nintendo Switch, the 7th major video game console from Nintendo.

The journey so far for yuzu
yuzu was initiated as an experiment by Citra's lead developer bunnei after he saw signs that the Switch's operating system was based on the 3DS's operating system. yuzu shares its core code with Citra, along with much of the same OS High-Level Emulation (HLE). The core emulation and memory management of yuzu are based on Citra, albeit modified to work with 64-bit addresses. It also has a loader for Switch games and Unicorn integration for CPU emulation.

yuzu uses reverse engineering to figure out how games work and how the Switch GPU works. The Switch's GPU is more advanced than the 3DS's used in Citra and poses multiple challenges to reverse engineer. However, the reverse-engineering process for yuzu is essentially the same as for Citra; most of it, like the rest of development, is done in a trial-and-error manner.

OS emulation
The Switch's OS is based on the Nintendo 3DS's OS, so the developers reused a large part of Citra's OS HLE code for yuzu. The loader and file system service were reused from Citra and modified to support Switch game dump files. The kernel OS threading, scheduling, and synchronization fixes for yuzu were also ported from Citra's OS implementation, as was the save data functionality, which allows games to read and write files to the save data directory. Switchbrew helped them create libnx, a userland library for writing homebrew apps for the Nintendo Switch. (Homebrew is a popular term for applications created and executed on a video game console by hackers, programmers, developers, and consumers.)

The Switch IPC (inter-process communication) mechanism is much more robust and complicated than the 3DS's.
Their system has different command modes, a typical IPC request response, and a Domain to efficiently conduct multiple service calls. Yuzu uses the Nvidia services to configure the video driver to get the graphics output. However, Nintendo re-purposed the Android graphics stack and used it in the Switch for rendering. And so yuzu developers had to implement this even to get homebrew applications to display graphics. The Next Steps Being at a nascent stage, yuzu still has a long way to go. The developers still have to add HID (user input support) such as support for all 9 controllers, rumble, LEDs, layouts etc. Currently, the Audio HLE is in progress, but they still have to implement audio playback. Audio playback, if implemented properly, would be a major breakthrough as most complicated games often hang or go into a deadlock because of this issue. They are also working on resolving minor fixes to help them boot further in games like Super Mario Odyssey, 1-2-Switch, and The Binding of Issac. Be sure to read the entire progress report on the yuzu blog. AI for game developers: 7 ways AI can take your game to the next level AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior Unity 2018.2: Unity release for this year 2nd time in a row!
Scrivito launches serverless JavaScript CMS

Kunal Chaudhari
17 Apr 2018
2 min read
Scrivito, a SaaS-based content management service, launched a new breed of cloud-based serverless JavaScript CMS specifically targeted at medium- to large-sized businesses. While the world is shifting to cutting-edge cloud technology, web CMS platforms are still stuck in the past. Thomas Witt, Co-Founder and CTO of Scrivito, said, "We're at a tipping point. Agencies and dev teams that stick with Wordpress and the like are doomed to be overtaken by the inevitable shift to serverless computing and JavaScript development." Scrivito checks the boxes for the key trending innovations in the web development space. Serverless? Yes. Cloud native? Yes. So what is unique about this cutting-edge content management interface, and how exactly does it differentiate itself from other traditional CMSes?

Scrivito requires zero maintenance thanks to the cloud

This is Scrivito's most distinctive feature. Since it is a cloud-based service, it allows developers to spin up a CMS instance without having to install anything or configure databases, search engine indexing, backups, or metadata. This means no downtime, no software patches, and minimal maintenance effort.

Component reusability powered by ReactJS

Scrivito is powered by Facebook's popular frontend framework, React. Thanks to React's reusable UI components and flexibility, developers can create complex, interactive functionality such as configurators or multi-page forms with ease. It is not only built for developers: it also makes it easier for agencies and marketing teams to build, edit, and manage secure, reliable, and cost-effective sites, microsites, and landing pages.

Scrivito is extendable

Scrivito is easily extendable because it doesn't require any infrastructure. Developers and editors can create their own widgets and data structures on the fly. Thanks to its unique working-copies technology, it brings version control from software development to the CMS world, eliminating the need for a staging server and allowing parallel editing of content across teams. Its API-driven approach combines the benefits of a serverless and a headless CMS with WYSIWYG editing in a single solution.

Scrivito has certainly ignited a revolution in the web development space by introducing serverless technologies to CMS applications. It is available at different price points for personal and enterprise users. To know more about other features and pricing options, check out the project's official webpage.
Cloudflare finally launches Warp and Warp Plus after a delay of more than five months

Vincy Davis
27 Sep 2019
5 min read
More than five months after announcing Warp, Cloudflare finally made it available to the general public yesterday. With two million people on the waitlist to try Warp, the Cloudflare team says it was harder than they thought to build a next-generation service that secures consumer mobile connections without compromising on speed and power usage. Along with Warp, Cloudflare is also launching Warp Plus.

Warp is a free VPN built into the 1.1.1.1 DNS resolver app that speeds up mobile data by using the Cloudflare network to resolve DNS queries faster. It also comes with end-to-end encryption and does not require users to install a root certificate that would allow observation of encrypted internet traffic. It is built around a UDP-based protocol that is optimized for the mobile internet and offers excellent performance and reliability.

Why did Cloudflare delay the Warp release?

A few days before Cloudflare announced Warp on April 1st, Apple released iOS 12.2 with significant changes to its underlying network stack implementation. This made the Warp network unstable, forcing the Cloudflare team to arrange workarounds in their networking code, which took more time. Cloudflare adds, "We had a version of the WARP app that (kind of) worked on April 1. But, when we started to invite people from outside of Cloudflare to use it, we quickly realized that the mobile Internet around the world was far more wild and varied than we'd anticipated."

As the internet is made up of diverse network components, the Cloudflare team found it difficult to account for all the diversity of mobile carriers, mobile operating systems, and mobile device models, as well as users' diverse network settings. Warp uses a technology called Anycast to route user traffic to the Cloudflare network; however, Anycast can move users' traffic between entire data centers, which made Warp's operation complex.

To overcome these barriers, the Cloudflare team changed its approach by focusing first on iOS. The team also solidified the shared underpinnings of the app to ensure it would work even with future network stack upgrades, and tested Warp with beta users across many networks to discover as many corner cases as possible. In the process, the Cloudflare team invented new technologies to keep the session state stable even across multiple mobile networks.

Cloudflare introduces Warp Plus - an unlimited version of Warp

Along with Warp, the Cloudflare team has also launched Warp Plus, an unlimited version of Warp available for a monthly subscription fee. Warp Plus is faster than Warp, using Cloudflare's Argo Smart Routing to achieve higher speeds. The official blog post states, "Routing your traffic over our network often costs us more than if we release it directly to the internet." To cover these costs, Warp Plus charges a monthly fee of $4.99/month or less, depending on the user's location. The Cloudflare team also added that in a few weeks they will launch a test tool within the 1.1.1.1 app to let users "see how your device loads a set of popular sites without WARP, with WARP, and with WARP Plus."

Read Also: Cloudflare plans to go public; files S-1 with the SEC

To know more details about Warp Plus, read the technical post by the Cloudflare team.

Privacy features offered by Warp and Warp Plus

The 1.1.1.1 DNS resolver app provides strong privacy protections: debug logs are kept only long enough to ensure the security of the service, and Cloudflare retains only limited transaction data for legitimate operational and research purposes.

Warp will not only maintain the 1.1.1.1 DNS protection layers but also ensures that:

- User-identifiable log data will not be written to disk
- Users' browsing data will not be sold for advertising purposes
- Warp will not demand any personal information (name, phone number, or email address) to use Warp or Warp Plus
- Outside auditors will regularly audit Warp's functioning

The Cloudflare team has also notified users that the newly available Warp will still have bugs. The blog post specifies that the most common bug currently in Warp is caused by traffic misrouting, which makes Warp slower than non-Warp mobile internet.

Image Source: Cloudflare blog

The team has made it easy for users to report bugs: just click the little bug icon near the top of the screen in the 1.1.1.1 app, or shake the phone with the app open, and send a bug report to Cloudflare. Visit the Cloudflare blog for more information on Warp and Warp Plus.

Facebook will no longer involve third-party fact-checkers to review the political content on their platform
GNOME Foundation's Shotwell photo manager faces a patent infringement lawsuit from Rothschild Patent Imaging
A zero-day pre-auth vulnerability is currently being exploited in vBulletin, reports an anonymous researcher
Now there is a Deepfake that can animate your face with just your voice and a picture using temporal GANs

Savia Lobo
24 Jun 2019
6 min read
Last week, researchers from Imperial College London and Samsung's AI research center in the UK revealed how deepfakes can be used to generate a singing or talking video portrait from a still image of a person and an audio clip containing speech. In their paper, "Realistic Speech-Driven Facial Animation with GANs", the researchers use a temporal GAN with 3 discriminators focused on achieving detailed frames, audio-visual synchronization, and realistic expressions.

Source: arxiv.org

"The generated videos are evaluated based on sharpness, reconstruction quality, lip-reading accuracy, synchronization as well as their ability to generate natural blinks", the researchers mention in their paper.

https://youtu.be/9Ctm4rTdVTU

The researchers used the GRID, TCD TIMIT, CREMA-D and LRW datasets. The GRID dataset has 33 speakers each uttering 1000 short phrases, each containing 6 words randomly chosen from a limited dictionary. The TCD TIMIT dataset has 59 speakers uttering approximately 100 phonetically rich sentences each. The CREMA-D dataset includes 91 actors from a variety of age groups and races, each uttering 12 sentences; each sentence is acted out multiple times for different emotions and intensities. The researchers used the recommended data split for the TCD TIMIT dataset but excluded some of the test speakers, using them as a validation set instead. They performed data augmentation on the training set by mirroring the videos.

Metrics used to assess the quality of generated videos

The researchers evaluated the videos using traditional image reconstruction and sharpness metrics. These metrics can determine frame quality; however, they fail to reflect other important aspects of the video, such as audio-visual synchrony and the realism of facial expressions. Hence, they also proposed alternative methods capable of capturing these aspects of the generated videos.
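As a numeric reference for the reconstruction metrics discussed in this section, the peak signal-to-noise ratio (PSNR) is derived directly from the mean squared error (MSE) between corresponding frames. The sketch below is a minimal pure-Python illustration on toy flattened pixel lists; it is not the authors' implementation, and real evaluations operate on full video frames:

```python
import math

def psnr(reference, generated, max_val=255.0):
    """Peak signal-to-noise ratio in dB; larger values imply better quality."""
    # Mean squared error over flattened pixel intensities
    mse = sum((r - g) ** 2 for r, g in zip(reference, generated)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 4x4 frames flattened to lists: all-zero reference vs. a uniform error of 10
print(round(psnr([0] * 16, [10] * 16), 2))  # → 28.13
```

Because PSNR compares against a single ground-truth frame, any legitimate facial expression that differs from the ground truth is penalized, which is exactly the limitation the researchers note for reconstruction metrics.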
Reconstruction Metrics

This method uses common reconstruction metrics such as the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index to evaluate the generated videos. However, the researchers note that "reconstruction metrics will penalize videos for any facial expression that does not match those in the ground truth videos".

Sharpness Metrics

Frame sharpness is evaluated using the cumulative probability blur detection (CPBD) measure, which determines blur based on the presence of edges in the image. For this metric, as for the reconstruction metrics, larger values imply better quality.

Content Metrics

The content of the videos is evaluated based on how well the video captures the identity of the target and on the accuracy of the spoken words. The researchers verified the identity of the speaker using the average content distance (ACD), which measures the average Euclidean distance between the still image's representation, obtained using OpenFace, and the representations of the generated frames. The accuracy of the spoken message is measured using the word error rate (WER) achieved by a pre-trained lip-reading model; they used the LipNet model, which exceeds the performance of human lip-readers on the GRID dataset. For both content metrics, lower values indicate better accuracy.

Audio-Visual Synchrony Metrics

Synchrony is quantified using the method of Joon Son Chung and Andrew Zisserman's "Out of time: automated lip sync in the wild". In that work, Chung et al. propose the SyncNet network, which calculates the Euclidean distance between the audio and video encodings on small (0.2-second) sections of the video. The audio-visual offset is obtained by using a sliding window approach to find where the distance is minimized. The offset is measured in frames and is positive when the audio leads the video. For audio and video pairs that correspond to the same content, the distance will increase on either side of the point where the minimum distance occurs. However, for uncorrelated audio and video, the distance is expected to be stable. Based on this fluctuation, they further propose using the difference between the minimum and the median of the Euclidean distances as an audio-visual (AV) confidence score that determines the audio-visual correlation. Higher scores indicate a stronger correlation, whereas confidence scores smaller than 0.5 indicate that the audio and video are uncorrelated.

Limitations and the possible misuse of Deepfake

A limitation of this new deepfake method is that it only works for well-aligned frontal faces; "the natural progression of this work will be to produce videos that simulate in wild conditions", the researchers mention. While this research appears to be the next milestone for GANs in generating videos from still photos, it may also be misused for spreading misinformation by morphing video content from any still photograph.

Recently, at a House Intelligence Committee hearing, Top House Democrat Rep. Adam Schiff (D-CA) warned that deepfake videos could have a disastrous effect on the 2020 election cycle. "Now is the time for social media companies to put in place policies to protect users from this kind of misinformation not in 2021 after viral deepfakes have polluted the 2020 elections," Schiff said. "By then it will be too late." The hearing came only a few weeks after a real-life instance of a doctored political video, in which footage was edited to make House Speaker Nancy Pelosi appear drunk, spread widely on social media. "Every platform responded to the video differently, with YouTube removing the content, Facebook leaving it up while directing users to coverage debunking it, and Twitter simply letting it stand," The Verge reports. YouTube took the video down; however, Facebook refused to remove it. Neil Potts, Public Policy Director of Facebook, had stated that if someone posted a doctored video of Zuckerberg, like the one of Pelosi, it would stay up.
After this, on June 11, a fake video of Mark Zuckerberg was posted on Instagram under the username bill_posters_uk. In the video, Zuckerberg appears to give a threatening speech about the power of Facebook.

https://twitter.com/motherboard/status/1138536366969688064

Omer Ben-Ami, one of the founders of Canny, says that the video was made to educate the public on the uses of AI and to make them realize its potential. Though Zuckerberg's video was meant to retain the educational value of deepfakes, it shows how easily the technology can be misused. While some users say it has interesting applications, many are concerned that the chances of this software being misused outweigh its legitimate uses.

https://twitter.com/timkmak/status/1141784420090863616

A user commented on Reddit, "It has some really cool applications though. For example in your favorite voice acted video game, if all of the characters lips would be in sync with the vocals no matter what language you are playing the game in, without spending tons of money having animators animate the characters for every vocalization."

To know more about this new deepfake, read the official research paper.

Lawmakers introduce new Consumer privacy bill and Malicious Deep Fake Prohibition Act to support consumer privacy and battle deepfakes
Worried about Deepfakes? Check out the new algorithm that manipulate talking-head videos by altering the transcripts
Machine generated videos like Deepfakes – Trick or Treat?
LLVM 8.0.0 releases!

Natasha Mathur
22 Mar 2019
3 min read
The LLVM team released LLVM 8.0.0 earlier this week. LLVM is a collection of tools that help develop compiler front ends and back ends. LLVM is written in C++ and has been designed for compile-time, link-time, run-time, and "idle-time" optimization of programs written in arbitrary programming languages. The LLVM 8.0.0 release notes cover known issues, major improvements, and other changes across the LLVM subprojects.

There are certain issues in LLVM 8.0.0 that could not be fixed before this release. For instance, clang is getting miscompiled by trunk GCC, and "asan-dynamic" does not work on FreeBSD. Beyond the known issues, there is a long list of changes in LLVM 8.0.0.

Non-comprehensive changes to LLVM 8.0.0

The llvm-cov tool can export lcov trace files with the help of the -format=lcov option of the export command. The add_llvm_loadable_module CMake macro has been deprecated; the add_llvm_library macro with the MODULE argument now provides the same functionality. For MinGW, references to data variables that are to be imported from a DLL can now be accessed via a stub, which further allows the linker to convert the reference to a dllimport if needed. Support has been added for labels as offsets in the .reloc directive. Windows support for libFuzzer (x86_64) has also been added.

Other Changes

LLVM IR: A function attribute named speculative_load_hardening has been introduced. It indicates that Speculative Load Hardening should be enabled for the function body.

JIT APIs: The ORC (On Request Compilation) JIT APIs now support concurrent compilation. The existing (non-concurrent) ORC layer classes, as well as the related APIs, have been deprecated and renamed with a "Legacy" prefix (e.g. LegacyIRCompileLayer). All the deprecated classes will be removed in LLVM 9.

AArch64 Target: Support has been added for Speculative Load Hardening. There is also initial support for the Tiny code model, where code and statically defined symbols must remain within 1MB.

MIPS Target: Support for the GlobalISel instruction selection framework has been improved. The ORC JIT now offers support for the MIPS and MIPS64 architectures, and there is newly added support for the MIPS N32 ABI.

PowerPC Target: The default has been switched to non-PIC in LLVM 8.0.0, and Darwin support has been deprecated. Out-of-order scheduling has also been enabled for the P9.

SystemZ Target: Changes include various code-gen improvements related to improved auto-vectorization, inlining, and instruction scheduling.

Other than these, changes have also been made to the X86 target, WebAssembly target, Nios2 target, and LLDB. For a complete list of changes, check out the official LLVM 8.0.0 release notes.

LLVM 7.0.0 released with improved optimization and new tools for monitoring
LLVM will be relicensing under Apache 2.0 start of next year
LLVM officially migrating to GitHub from Apache SVN
workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice

Melisha Dsouza
20 Feb 2019
2 min read
Cloudflare users will very soon be able to deploy Workers without having a Cloudflare domain. They will be able to deploy their Cloudflare Workers to a subdomain of their choice, with an extension of .workers.dev. According to the Cloudflare blog, this is a step towards making it easy for users to get started with Workers and build a new serverless project from scratch.

Cloudflare Workers' serverless execution environment allows users to create new applications or improve existing ones without configuring or maintaining infrastructure. Cloudflare Workers run on Cloudflare servers, not in the user's browser, meaning that the code runs in a trusted environment where it cannot be bypassed by malicious clients.

workers.dev was obtained through Google's TLD launch program. Customers can head over to workers.dev, where they will be able to claim a subdomain (one per user). The workers.dev site is itself fully served using Cloudflare Workers.

Zack Bloom, the Director of Product for Product Strategy at Cloudflare, says that workers.dev will be especially useful for serverless apps. Without cold starts, users obtain instant scaling to almost any volume of traffic, making this type of serverless seem faster and cheaper.

Cloudflare Workers have received an amazing response from users all over the internet:

Source: HackerNews

This news has also been received with much enthusiasm:

https://twitter.com/MrAhmadAwais/status/1097919710249783297

You can head over to the Cloudflare blog for more information on this news.

Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers
AWS announces more flexibility in its Certification Exams, drops its exam prerequisites

Melisha Dsouza
18 Oct 2018
2 min read
Last week (on 11th October), the AWS team announced that they are removing exam prerequisites to give users more flexibility in the AWS Certification Program. Previously, a customer had to pass the Foundational or Associate level exam before appearing for a Professional or Specialty certification. AWS has now eliminated this prerequisite in response to customers' requests for flexibility: customers are no longer required to hold an Associate certification before pursuing a Professional certification, nor a Foundational or Associate certification before pursuing a Specialty certification.

The Professional level exams are quite tough to pass: without deep knowledge of the AWS platform, passing them is difficult. If a customer skips the Foundational or Associate level exams and appears directly for the Professional level exams, he or she may not have the practice and knowledge necessary to fare well; and failing the exam and backing up to the Associate level can be demotivating.

AWS Certification helps individuals demonstrate the expertise to design, deploy, and operate highly available, cost-effective, and secure applications on AWS, and the proficiency they gain with AWS brings tangible benefits. The certifications also help employers identify skilled professionals who can use AWS technologies to lead IT initiatives, and reduce the risks and costs of implementing their workloads and projects on the AWS platform.

AWS dominates the cloud computing market, and the AWS Certified Solutions Architect exams can help candidates secure their career in this exciting field. AWS offers digital and classroom training to build cloud skills and prepare for certification exams. To know more about this announcement, head over to their official blog.
‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence AWS machine learning: Learning AWS CLI to execute a simple Amazon ML workflow [Tutorial]  
Firewall Ports You Need to Open for Availability Groups from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
6 min read
Something that never ceases to amaze me is the frequent request for help in figuring out what ports are needed for Availability Groups in SQL Server to function properly. These requests arise for a multitude of reasons, ranging from a new AG implementation to the migration of an existing AG to a different VLAN. Whenever these requests come in, it is a good thing in my opinion. Why? Well, that tells me that the network team is trying to instantiate a more secure operating environment by having segregated VLANs and firewalls between the VLANs. This is always preferable to having firewall rules of ANY/ANY (I correlate that kind of firewall rule to granting "CONTROL" to the public server role in SQL Server).

So What Ports are Needed Anyway?

If you are of the mindset that a firewall rule of ANY/ANY is a good thing, or if your Availability Group is entirely within the same VLAN, then you may not need to read any further - unless, of course, you have a software firewall (such as Windows Defender Firewall) running on your servers. If you are in the category where you do need to figure out which ports are necessary, then this article will provide you with a very good starting point.
Windows Server Clustering

TCP/UDP 53 - User & Computer Authentication [DNS]
TCP/UDP 88 - User & Computer Authentication [Kerberos]
UDP 123 - Windows Time [NTP]
TCP 135 - Cluster DCOM Traffic [RPC, EPM]
UDP 137 - User & Computer Authentication [NetLogon, NetBIOS, Cluster Admin, Fileshare Witness]
UDP 138 - DFS, Group Policy [DFSN, NetLogon, NetBIOS Datagram Service, Fileshare Witness]
TCP 139 - DFS, Group Policy [DFSN, NetLogon, NetBIOS Datagram Service, Fileshare Witness]
UDP 161 - SNMP
TCP/UDP 162 - SNMP Traps
TCP/UDP 389 - User & Computer Authentication [LDAP]
TCP/UDP 445 - User & Computer Authentication [SMB, SMB2, CIFS, Fileshare Witness]
TCP/UDP 464 - User & Computer Authentication [Kerberos Change/Set Password]
TCP 636 - User & Computer Authentication [LDAP SSL]
TCP 3268 - Microsoft Global Catalog
TCP 3269 - Microsoft Global Catalog [SSL]
TCP/UDP 3343 - Cluster Network Communication
TCP 5985 - WinRM 2.0 [Remote PowerShell]
TCP 5986 - WinRM 2.0 HTTPS [Remote PowerShell SECURE]
TCP/UDP 49152-65535 - Dynamic TCP/UDP [RPC and DCOM; range defined by company policy {CAN BE CHANGED}] *

SQL Server

TCP 1433 - SQL Server/Availability Group Listener [Default Port {CAN BE CHANGED}]
TCP/UDP 1434 - SQL Server Browser
UDP 2382 - SQL Server Analysis Services Browser
TCP 2383 - SQL Server Analysis Services Listener
TCP 5022 - SQL Server DBM/AG Endpoint [Default Port {CAN BE CHANGED}]
TCP/UDP 49152-65535 - Dynamic TCP/UDP [Defined by Company Policy {CAN BE CHANGED}] *

* Randomly allocated UDP port number between 49152 and 65535

So I have a List of Ports, what now?

Knowing is half the power, and with great knowledge comes great responsibility - or something like that. In reality, now that we know what is needed, the next step is to go out and validate that the ports are open and working. One of the easier ways to do this is with PowerShell.
$RemoteServers = "Server1","Server2"
$InbndServer = "HomeServer"
$TCPPorts = "53","88","135","139","162","389","445","464","636","3268","3269","3343","5985","5986","49152","65535","1433","1434","2383","5022"
$UDPPorts = "53","88","123","137","138","161","162","389","445","464","3343","49152","65535","1434","2382"

$TCPResults = @()
$TCPResults = Invoke-Command $RemoteServers {
    param($InbndServer, $TCPPorts)
    $Object = New-Object PSCustomObject
    $Object | Add-Member -MemberType NoteProperty -Name "ServerName" -Value $env:COMPUTERNAME
    $Object | Add-Member -MemberType NoteProperty -Name "Destination" -Value $InbndServer
    Foreach ($P in $TCPPorts) {
        $PortCheck = (TNC -Port $P -ComputerName $InbndServer).TcpTestSucceeded
        If ($PortCheck -notmatch "True|False") { $PortCheck = "ERROR" }
        $Object | Add-Member Noteproperty "$("Port " + "$P")" -Value "$($PortCheck)"
    }
    $Object
} -ArgumentList $InbndServer, $TCPPorts | select * -ExcludeProperty runspaceid, pscomputername

$TCPResults | Out-GridView -Title "AG and WFC TCP Port Test Results"
$TCPResults | Format-Table * #-AutoSize

$UDPResults = Invoke-Command $RemoteServers {
    param($InbndServer, $UDPPorts)
    $test = New-Object System.Net.Sockets.UdpClient
    $Object = New-Object PSCustomObject
    $Object | Add-Member -MemberType NoteProperty -Name "ServerName" -Value $env:COMPUTERNAME
    $Object | Add-Member -MemberType NoteProperty -Name "Destination" -Value $InbndServer
    Foreach ($P in $UDPPorts) {
        Try {
            $test.Connect($InbndServer, $P)
            $PortCheck = "TRUE"
            $Object | Add-Member Noteproperty "$("Port " + "$P")" -Value "$($PortCheck)"
        }
        Catch {
            $PortCheck = "ERROR"
            $Object | Add-Member Noteproperty "$("Port " + "$P")" -Value "$($PortCheck)"
        }
    }
    $Object
} -ArgumentList $InbndServer, $UDPPorts | select * -ExcludeProperty runspaceid, pscomputername

$UDPResults | Out-GridView -Title "AG and WFC UDP Port Test Results"
$UDPResults | Format-Table * #-AutoSize

This script will test all of the related TCP and UDP ports required to ensure your Windows Failover Cluster and SQL Server Availability Group works flawlessly. If you execute the script, you will see results similar to the following.

Data Driven Results

In the preceding image, I have combined each of the GridView output windows into a single screenshot. Highlighted in red is the result set for the TCP tests, and in blue is the window for the test results for the UDP ports. With this script, I can take definitive results all in one screenshot and share them with the network admin to try and resolve any port deficiencies. This is just a small data-driven tool that can help ensure quicker resolution when trying to ensure the appropriate ports are open between servers. A quicker resolution in opening the appropriate ports means a quicker resolution to the project, and that much sooner you can move on to other tasks that show more value!

Put a bow on it

This article has demonstrated a meaningful and efficient method to test and validate (along with the valuable documentation) the necessary firewall ports for Availability Groups (AG) and Windows Failover Clustering. With the script provided in this article, you can provide quick, value-added service to your project along with valuable documentation of what is truly needed to ensure proper AG functionality.

Interested in learning some additional deep technical information? Check out these articles! Here is a blast from the past that is interesting and somewhat related to SQL Server ports; check it out here. This is the sixth article in the 2020 "12 Days of Christmas" series. For the full list of articles, please visit this page.

The post Firewall Ports You Need to Open for Availability Groups first appeared on SQL RNNR.
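For environments without PowerShell remoting, the same TCP reachability idea can be sketched cross-platform with Python's standard library. This is only an illustrative alternative, not a replacement for the script above: "HomeServer" is the placeholder target name from this article, and a successful TCP connect only proves the port is reachable, not that the service behind it is healthy.

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable host
        return False

# Spot-check a few of the cluster/AG ports from the tables above
for port in (135, 445, 1433, 5022):
    print(port, tcp_port_open("HomeServer", port))
```

Note that UDP is connectionless, so a connect-style check (like the UdpClient approach above) cannot confirm that a listener is actually present; it mainly catches name-resolution and hard network errors.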
Related Posts:

Here is an Easy Fix for SQL Service Startup Issues… - December 28, 2020
Connect To SQL Server - Back to Basics - March 27, 2019
SQL Server Extended Availability Groups - April 1, 2018
Single User Mode - Back to Basics - May 31, 2018
Lost that SQL Server Access? - May 30, 2018

The post Firewall Ports You Need to Open for Availability Groups appeared first on SQLServerCentral.
Anonymous
01 Jan 2021
9 min read

Your Quick Introduction to Extended Events in Analysis Services from Blog Posts - SQLServerCentral

The Extended Events (XEvents) feature in SQL Server is a really powerful tool and one of my favorites. The tool is so powerful and flexible that it can even be used in SQL Server Analysis Services (SSAS). Furthermore, it is such a cool tool that there is an entire site dedicated to XEvents. Sadly, despite the flexibility and power that comes with XEvents, there isn't much information about what it can do with SSAS. This article intends to help shed some light on XEvents within SSAS from an internals and introductory point of view, with the hope of paving the way for more in-depth articles on how to use XEvents with SSAS.

Introducing your Heavy Weight Champion of the SQLverse – XEvents

For all of the power, might, strength, and flexibility of XEvents, it sees next to no use in the realm of SSAS. Much of that is due to three factors: 1) lack of a GUI, 2) addiction to Profiler, and 3) inadequate information about XEvents in SSAS. This last reason can be coupled with a sub-reason of “nobody is pushing XEvents in SSAS”. For me, these are all just excuses to remain attached to a bad habit. While it is true that, just like in SQL Server, earlier versions of SSAS did not have a GUI for XEvents, that is no longer the case. As for the inadequate information about the feature, I am hopeful that we can treat that excuse starting with this article. In regard to the Profiler addiction, never fear: there is a GUI, and the Profiler events are accessible via that GUI just the same as the new XEvents events are. How do we know this? Well, the GUI tells us as much, as shown here. In the preceding image, I have two sections highlighted in red. The first of note is evidence that this is the GUI for SSAS: note that the connection box states “Group of Olap servers.” The second area of note is the highlight demonstrating the two types of categories in XEvents for SSAS.
These two categories, as you can see, are “profiler” and “purexevent” (not to be confused with “Purex® event”). In short: yes, Virginia, there is an XEvent GUI, and that GUI incorporates your favorite Profiler events as well.

Let’s See the Nuts and Bolts

This article is not about introducing the GUI for XEvents in SSAS; I will get to that in a future article. This article introduces you to the stuff behind the scenes. In other words, we want to look at the metadata that helps govern the XEvents feature within the sphere of SSAS. In my opinion, the most efficient way to explore the underpinnings of XEvents in SSAS is to first set up a linked server to make querying the metadata easier.

EXEC master.dbo.sp_addlinkedserver
    @server = N'SSASDIXNEUFLATIN1' --whatever LinkedServer name you desire
  , @srvproduct = N'MSOLAP'
  , @provider = N'MSOLAP'
  , @datasrc = N'SSASServerSSASInstance' --change your data source to an appropriate SSAS instance
  , @catalog = N'DemoDays'; --change your default database
GO

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'SSASDIXNEUFLATIN1'
  , @useself = N'False'
  , @locallogin = NULL
  , @rmtuser = NULL
  , @rmtpassword = NULL;
GO

Once the linked server is created, you are primed and ready to start exploring SSAS and the XEvent feature metadata. The first thing to do is familiarize yourself with the system views that drive XEvents. You can do this with the following query.

SELECT lq.*
FROM OPENQUERY(SSASDIXNEUFLATIN1, 'SELECT * FROM $system.dbschema_tables') AS lq
WHERE CONVERT(VARCHAR(100), lq.TABLE_NAME) LIKE '%XEVENT%'
   OR CONVERT(VARCHAR(100), lq.TABLE_NAME) LIKE '%TRACE%'
ORDER BY CONVERT(VARCHAR(100), lq.TABLE_NAME);

When the preceding query is executed, you will see results similar to the following. In this image you will note that I have two sections highlighted. The first section, in red, is the group of views related to the trace/profiler functionality. The second section, in blue, is the group of views related to the XEvents feature in SSAS.
Unfortunately, this does demonstrate that XEvents in SSAS is a bit less mature than one might expect, and definitely shows that it is less mature in SSAS than it is in the SQL engine. That shortcoming aside, we will use these views to explore further into the world of XEvents in SSAS.

Exploring Further

Knowing what the group of views looks like, we have a fair idea of where to look next in order to become more familiar with XEvents in SSAS. The views I would primarily focus on (at least for this article) are DISCOVER_TRACE_EVENT_CATEGORIES, DISCOVER_XEVENT_OBJECTS, and DISCOVER_XEVENT_PACKAGES. Granted, I will only be using the DISCOVER_XEVENT_PACKAGES view very minimally. From here is where things get a little trickier. I will take advantage of temp tables and some more OPENQUERY trickery to dump the data, in order to be able to relate it and use it in an easily consumable format. Before getting into the queries I will use, first a description of the objects I am using. DISCOVER_TRACE_EVENT_CATEGORIES is stored in XML format and is basically a definition document of the Profiler-style events; in order to consume it, the XML needs to be parsed and put into a better format. DISCOVER_XEVENT_PACKAGES is the object that lets us know what area of SSAS an event is related to, and is a very basic attempt at grouping some of the events into common domains. DISCOVER_XEVENT_OBJECTS is where the majority of the action resides for Extended Events. This object defines the different object types (actions, targets, maps, messages, and events – more on that in a separate article).

Script Fun

Now for the fun in the article!
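Before the full script, a quick warm-up helps show what two of these views return on their own. This is a minimal sketch, reusing the SSASDIXNEUFLATIN1 linked server created earlier; your linked server name and results will differ by instance.

```sql
-- Peek at the XEvent packages: the id returned here is what the main
-- script joins to via xo.package_id (and filters on by GUID).
SELECT lq.*
FROM OPENQUERY
     (
       SSASDIXNEUFLATIN1
     , 'SELECT * FROM $system.DISCOVER_XEVENT_PACKAGES'
     ) AS lq;

-- Peek at the raw trace categories: the [Data] column is an XML blob,
-- which is why the main script has to shred it with .nodes() and .value().
SELECT CONVERT(XML, lq.[Data]) AS CategoryXML
FROM OPENQUERY
     (
       SSASDIXNEUFLATIN1
     , 'SELECT * FROM $system.DISCOVER_TRACE_EVENT_CATEGORIES'
     ) AS lq;
```

Seeing the raw XML makes it clear why the shredding in the script below is necessary; the packages view, by contrast, comes back already tabular.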
IF OBJECT_ID('tempdb..#SSASXE') IS NOT NULL
BEGIN
    DROP TABLE #SSASXE;
END;

IF OBJECT_ID('tempdb..#SSASTrace') IS NOT NULL
BEGIN
    DROP TABLE #SSASTrace;
END;

SELECT CONVERT(VARCHAR(MAX), xo.Name) AS EventName
     , xo.description AS EventDescription
     , CASE WHEN xp.description LIKE 'SQL%' THEN 'SSAS XEvent'
            WHEN xp.description LIKE 'Ext%' THEN 'DLL XEvents'
            ELSE xp.name
       END AS PackageName
     , xp.description AS CategoryDescription --very generic due to it being the package description
     , NULL AS CategoryType
     , 'XE Category Unknown' AS EventCategory
     , 'PureXEvent' AS EventSource
     , ROW_NUMBER() OVER (ORDER BY CONVERT(VARCHAR(MAX), xo.name)) + 126 AS EventID
INTO #SSASXE
FROM
     ( SELECT * FROM OPENQUERY(SSASDIXNEUFLATIN1, 'select * From $system.Discover_Xevent_Objects') ) xo
INNER JOIN
     ( SELECT * FROM OPENQUERY(SSASDIXNEUFLATIN1, 'select * FROM $system.DISCOVER_XEVENT_PACKAGES') ) xp
    ON xo.package_id = xp.id
WHERE CONVERT(VARCHAR(MAX), xo.object_type) = 'event'
  AND xp.ID <> 'AE103B7F-8DA0-4C3B-AC64-589E79D4DD0A'
ORDER BY CONVERT(VARCHAR(MAX), xo.[name]);

SELECT ec.x.value('(./NAME)[1]', 'VARCHAR(MAX)') AS EventCategory
     , ec.x.value('(./DESCRIPTION)[1]', 'VARCHAR(MAX)') AS CategoryDescription
     , REPLACE(d.x.value('(./NAME)[1]', 'VARCHAR(MAX)'), ' ', '') AS EventName
     , d.x.value('(./ID)[1]', 'INT') AS EventID
     , d.x.value('(./DESCRIPTION)[1]', 'VARCHAR(MAX)') AS EventDescription
     , CASE ec.x.value('(./TYPE)[1]', 'INT')
         WHEN 0 THEN 'Normal'
         WHEN 1 THEN 'Connection'
         WHEN 2 THEN 'Error'
       END AS CategoryType
     , 'Profiler' AS EventSource
INTO #SSASTrace
FROM
     ( SELECT CONVERT(XML, lq.[Data])
       FROM OPENQUERY(SSASDIXNEUFLATIN1, 'Select * from $system.Discover_trace_event_categories') lq
     ) AS evts(event_data)
CROSS APPLY event_data.nodes('/EVENTCATEGORY/EVENTLIST/EVENT') AS d(x)
CROSS APPLY event_data.nodes('/EVENTCATEGORY') AS ec(x)
ORDER BY EventID;

SELECT ISNULL(trace.EventCategory, xe.EventCategory) AS EventCategory
     , ISNULL(trace.CategoryDescription, xe.CategoryDescription) AS CategoryDescription
     , ISNULL(trace.EventName, xe.EventName) AS EventName
     , ISNULL(trace.EventID, xe.EventID) AS EventID
     , ISNULL(trace.EventDescription, xe.EventDescription) AS EventDescription
     , ISNULL(trace.CategoryType, xe.CategoryType) AS CategoryType
     , ISNULL(CONVERT(VARCHAR(20), trace.EventSource), xe.EventSource) AS EventSource
     , xe.PackageName
FROM #SSASTrace trace
FULL OUTER JOIN #SSASXE xe
    ON trace.EventName = xe.EventName
ORDER BY EventName;

Given the level of maturity of XEvents in SSAS, some massaging of the data has to be done so that we can correlate the trace events to the XEvents events: little things like missing EventIDs on the XEvents side, missing categories, and so forth. That's fine; we are able to work around it and produce results similar to the following. If you compare it to the GUI, you will see that it is quite similar and should help bridge the gap between the metadata and the GUI for you.

Put a bow on it

Extended Events is a powerful tool for many facets of SQL Server. While it may still be rather immature in the world of SSAS, it still has a great deal of benefit and power to offer. Getting to know XEvents in SSAS can be a crucial skill in improving your Data Superpowers, and it is well worth the time spent trying to learn such a cool feature. Interested in learning more about the depth and breadth of Extended Events? Check these out, or check out the XE website here. Want to learn more about your indexes? Try this index maintenance article or this index size article. This is the seventh article in the 2020 “12 Days of Christmas” series. For the full list of articles, please visit this page. The post Your Quick Introduction to Extended Events in Analysis Services first appeared on SQL RNNR.