Tech News

Storage savings with Table Compression from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
2 min read
In one of my recent assignments, my client asked me for a solution to reduce the disk space requirement of the staging database of an ETL workload. That led me to study and compare the Table Compression feature of SQL Server. This article will not explain Compression itself but will compare the storage and performance aspects of compressed vs non-compressed tables. I found a useful article on Compression written by Gerald Britton; it is quite comprehensive and covers most aspects of Compression.

For my POC, I used an SSIS package. I kept two data flows with the same table and file structure, one with Table Compression enabled and the other without. The table and file had around 100 columns, all of VARCHAR datatype, since the POC was for a staging database that temporarily holds the raw data from flat files. I also had to work on converting the flat file source output columns to make them compatible with the destination SQL Server table structure. The POC was run with various file sizes because we also wanted to identify the optimal file size, so we covered two things in a single POC: a comparison of Compression and finding the optimal file size for the ETL process. The POC itself was very simple, with two data flows, both reading flat files as the source and writing to a SQL Server table as the destination.

Here is the comparison recorded after the POC. I think you will find it useful in deciding whether it is worth implementing Compression in your respective workload.

Findings:
- Space saving: approx. 87%.
- Write execution time: no difference.
- Read execution time: slight / negligible difference. A plain SELECT statement was executed to compare read execution time; the compressed table took 10-20 seconds more, which is approximately <2%.

Compared to the disk space saved, this slight overhead was acceptable in our workload. However, you need to review your own case thoroughly before taking any decision.

The post Storage savings with Table Compression appeared first on SQLServerCentral.
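As an editor's illustrative addendum (not part of the original post): the comparison above hinges on enabling data compression on the staging table and measuring the space used before and after. A minimal sketch of that check, assuming Python with pyodbc against a local SQL Server instance and a placeholder table name, might look like this:

```python
# Hypothetical sketch: enable PAGE compression on a staging table and compare space used.
# The connection string and table name are placeholders, not from the original post.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=StagingDB;Trusted_Connection=yes;"
)
cursor = conn.cursor()

def space_used(table: str) -> None:
    """Print the rows/reserved/data figures reported by sp_spaceused for a table."""
    cursor.execute("EXEC sp_spaceused ?", table)
    print(table, tuple(cursor.fetchone()))

space_used("dbo.StagingTable")

# Rebuild the table with PAGE compression (ROW is the other option).
cursor.execute("ALTER TABLE dbo.StagingTable REBUILD WITH (DATA_COMPRESSION = PAGE);")
conn.commit()

space_used("dbo.StagingTable")
conn.close()
```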

Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations

Natasha Mathur
29 Nov 2018
4 min read
The Google Chrome team announced the release date for its Autoplay Policy earlier this week. The policy had been delayed after it shipped with the Chrome 66 stable release back in May this year. The latest policy change is scheduled to come out along with Chrome 71 in the upcoming month.

The Autoplay policy imposes restrictions that prevent videos and audio from autoplaying in the web browser. For websites that want to be able to autoplay their content, the new policy change will prevent playback by default. For most sites playback will be resumed, but in other cases a small code adjustment will be required to resume the audio.

Additionally, Google has added a new approach to the policy that tracks users' past behavior with sites that have autoplay enabled. If a user regularly lets audio play for more than 7 seconds on a website, autoplay gets enabled for that website. This is done with the help of a "Media Engagement Index" (MEI), an index stored locally per Chrome profile on a device. MEI tracks the number of visits to a site that include audio playback of more than 7 seconds. Each website gets a score between zero and one in the MEI, where higher scores indicate that the user doesn't mind audio playing on that website. For new user profiles, or if a user clears their browsing data, a pre-seeded list based on anonymized aggregated MEI scores is used to decide which websites can autoplay. The pre-seeded site list is algorithmically generated, and only sites with enough users permitting autoplay on that site are added to the list. "We believe by learning from the user – and anticipating their intention on a per website basis – we can create the best user experience. If users tend to let content play from a website, we will autoplay content from that site in the future. Conversely, if users tend to stop autoplay content from a given website, we will prevent autoplay for that content by default", mentions the Google team.

The reason behind the delay

The autoplay policy had been delayed by Google after receiving feedback from the Web Audio developer community, especially web game developers and WebRTC developers. As per the feedback, the autoplay change was affecting many web games and audio experiences, especially on sites that had not been updated for the change. Delaying the policy rollout gave web game developers enough time to update their websites. Moreover, Google also explored ways to reduce the negative impact of the audio play policy on websites with audio enabled. Following this, Google has made an adjustment to its implementation of Web Audio to reduce the number of websites that were originally impacted.

New adjustments made for developers

As per the new adjustments, audio will resume automatically when the user has interacted with a page and the start() method of a source node is called. A source node represents an individual audio snippet that most games play, for example the sound that plays when a player collects a coin, or the background music that plays within a particular stage of a game. Game developers call the start() function on source nodes whenever any of these sounds are needed by the game. These changes will enable autoplay in most web games when the user starts playing the game.

The Google team has also introduced a mechanism that allows users to disable the autoplay policy for cases where the automatic learning doesn't work as expected. Along with the new autoplay policy update, Google will also stop showing existing annotations on YouTube videos to viewers starting from January 15, 2019; all existing annotations will be removed. "We always put our users first but we also don't want to let down the web development community. We believe that with our adjustments to the implementation of the policy, and the additional time we provided for web audio developers to update their code, that we will achieve this balance with Chrome 71", says the Google team.

For more information, check out Google's official blog post.

"ChromeOS is ready for web development" – A talk by Dan Dascalescu at the Chrome Web Summit 2018
Day 1 of Chrome Dev Summit 2018: new announcements and Google's initiative to close the gap between web and native
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

GitHub for Unity 1.0 is here with Git LFS and file locking support

Sugandha Lahoti
19 Jun 2018
3 min read
GitHub for Unity is now available in version 1.0. GitHub for Unity 1.0 is a free and open source Unity editor extension that brings Git into Unity 5.6, 2017.x, and 2018.x. GitHub for Unity was announced as an alpha version in March 2017, and the beta version was released earlier this year. Now the full release, GitHub for Unity 1.0, is available just in time for Unite Berlin 2018, scheduled for June 19-21.

GitHub for Unity 1.0 helps you stay in sync with your team: you can collaborate with other developers, pull down recent changes, and lock files to avoid troublesome merge conflicts. It also introduces two key features for game developers and their teams for managing large assets and critical scene files using Git, with the same ease as managing code files.

Updates to Git LFS

GitHub for Unity 1.0 has improved Git and Git LFS support for Mac. Git Large File Storage (LFS) replaces large files such as audio samples, videos, datasets, and graphics with text pointers inside Git. Previously, the package included full portable installations of Git and Git LFS. Now, these are downloaded when needed, reducing the package size to 1.6MB. Critical Git and Git LFS updates and patches are now distributed faster and in a more flexible way.

File locking

File locking management is now a top-level view within the GitHub window. With this new feature, developers can lock or unlock multiple files.

Other features include:
- Diffing support to visualize changes to files. The diffing program can be customized (set in the "Unity Preferences" area) directly from the "Changes" view in the GitHub window.
- No command line hassles: developers can view project history, experiment in branches, craft a commit from their changes, and push their code to GitHub without leaving Unity.
- A Git action bar for essential operations.
- Update notifications: game developers get a notification within Unity whenever a new version is available and can choose to download or skip the current update.
- Easy email sign-in: developers can sign in to their GitHub account with their GitHub username or the email address associated with their account.

GitHub for Unity 1.0 is available for download at unity.github.com and from the Unity Asset Store.

Lead developer at Unity, Andreia Gaita, will give a GitHub for Unity talk on June 19 at Unite Berlin to explain how to incorporate Git into your game development workflow.

Put your game face on! Unity 2018.1 is now available
Unity announces a new automotive division and two-day Unity AutoTech Summit
AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

Kali Linux 2019.4 released with Xfce, a new desktop environment, a new GTK3 theme, and much more!

Savia Lobo
27 Nov 2019
3 min read
On November 26, the Kali Linux team announced its fourth and final release of 2019, Kali Linux 2019.4, which is readily available for download. A few features of Kali Linux 2019.4 include a new default desktop environment, Xfce; a new GTK3 theme (for Gnome and Xfce); a "Kali Undercover" mode; an upgrade of the kernel to version 5.3.9; and much more.

Talking about ARM, the team highlighted, "2019.4 is the last release that will support 8GB sdcards on ARM. Starting in 2020.1, a 16GB sdcard will be the minimum we support."

What's new in Kali Linux 2019.4?

New desktop environment, Xfce, and GTK3 theme

The much-awaited desktop environment update is here. The older versions had certain performance issues resulting in a fractured user experience. To address this, the team developed a new theme running on Xfce. Its lightweight design can run on all levels of Kali installs. The new theme can handle the various needs of the average user with no changes: it uses standard UI concepts, so there is no learning curve, and it looks great with modern UI elements that make efficient use of screen space.

Kali Undercover mode

For pentesters doing their work in a public environment, the team has made a little script that will change the user's Kali theme to look like a default Windows installation. This way, users can work a bit more incognito. "After you are done and in a more private place, run the script again and you switch back to your Kali theme. Like magic!", the official blog post reads.

BTRFS during setup

Another significant addition to the documentation is the use of BTRFS as a root file system. This gives users the ability to do file system rollbacks after upgrades. When users are in a VM and about to try something new, they will often take a snapshot in case things go wrong; running Kali bare metal does not make that so easy. With BTRFS, users can have a similar snapshot capability on a bare metal install, and a manual cleanup procedure is also included.

NetHunter Kex – Full Kali Desktop on Android phones

With NetHunter Kex, users can attach their Android devices to an HDMI output along with a Bluetooth keyboard and mouse and get a full, no-compromise Kali desktop from their phones. To get a full breakdown of how to use NetHunter Kex, check out the official documentation on the Kali Linux website.

Kali Linux users are excited about this release and look forward to trying the newly added features.

https://twitter.com/firefart/status/1199372224026861568
https://twitter.com/azelhajjar/status/1199648846470615040

To know more about other features in detail, read the Kali Linux 2019.4 official release notes on the Kali Linux website.

Glen Singh on why Kali Linux is an arsenal for any cybersecurity professional [Interview]
Kali Linux 2019.1 released with support for Metasploit 5.0
Kali Linux 2018 for testing and maintaining Windows security – Wolf Halton and Bo Weaver [Interview]

Drupal 9 will be released in 2020, shares Dries Buytaert, Drupal’s founder

Bhagyashree R
14 Dec 2018
2 min read
At Drupal Europe 2018, Dries Buytaert, the founder and lead developer of the Drupal content management system, announced that Drupal 9 will be released in 2020. Yesterday, he shared a more detailed timeline for Drupal 9, according to which it is planned for release on June 3, 2020.

One of Drupal 8's biggest dependencies is Symfony 3, which is scheduled to reach its end of life by November 2021. This means that no security bugs in Symfony 3 will be fixed after that, and people will have to move to Drupal 9 for better support and security. Going by this plan, site owners will have at least one year to upgrade from Drupal 8 to Drupal 9.

Drupal 9 will not have a separate code base; rather, the team is adding new functionality to Drupal 8 as backward-compatible code and experimental features. Once they are sure these features are stable, the old functionality will be deprecated.

One of the most notable updates will be support for Symfony 4 or 5 in Drupal 9. Since Symfony 5 is not yet released, the scope of its changes is not yet clear to the Drupal team, so they are focusing on running Drupal 8 with Symfony 4. The goal is to make Drupal 8 work with Symfony 3, 4, or 5 so that any issues encountered can be fixed before Symfony 4 or 5 becomes a requirement in Drupal 9.

As Drupal 9 is being built in Drupal 8, this will make things much easier for every stakeholder. Drupal core contributors will just have to remove the deprecated functionality and upgrade the dependencies, and for site owners it will be much easier to upgrade to Drupal 9 than it was to upgrade to Drupal 8.

Dries Buytaert said in his post, "Drupal 9 will simply be the last version of Drupal 8, with its deprecations removed. This means we will not introduce new, backwards-compatibility breaking APIs or features in Drupal 9 except for our dependency updates. As long as modules and themes stay up-to-date with the latest Drupal 8 APIs, the upgrade to Drupal 9 should be easy. Therefore, we believe that a 12- to 18-month upgrade period should suffice."

You can read the full announcement on Drupal's website.

WordPress 5.0 (Bebo) released with improvements in design, theme and more
5 things to consider when developing an eCommerce website
Introduction to WordPress Plugin

MongoDB going relational with 4.0 release

Amey Varangaonkar
16 Apr 2018
2 min read
MongoDB is, without a doubt, the most popular NoSQL database today. Per the Stack Overflow Developer Survey, more developers have wanted to work with MongoDB than with any other database over the last two years. With the upcoming MongoDB 4.0 release, it plans to up the ante by adding support for multi-document transactions and ACID guarantees (Atomicity, Consistency, Isolation, and Durability).

Poised to be released this summer, MongoDB 4.0 will combine the speed, flexibility, and efficiency of document models (the features that make MongoDB such a great database to use) with the assurance of transactional integrity. This new addition should give the database a more relational feel and would suit large applications with high data integrity needs, regardless of how the data is modeled. MongoDB has also ensured that the support for multi-document transactions will not affect the overall speed and performance of unrelated workloads running concurrently.

MongoDB has been working on this transactional integrity feature for over 3 years now, ever since it incorporated the WiredTiger storage engine. The MongoDB 4.0 release should also see the introduction of some other important features such as snapshot isolation, a consistent view of data, the ability to roll back transactions, and other ACID features.

Per the 4.0 product roadmap, 85% of the work is already done, and the release seems to be on time to hit the market.

You can read more about the announcement on MongoDB's official page. You can also join the beta program to test out the newly added features in 4.0.
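As an illustrative addendum (not from the announcement): once multi-document transactions are available, the PyMongo driver exposes them through sessions. A minimal sketch, assuming a replica-set deployment and placeholder database, collection, and document names:

```python
# Hypothetical sketch of a multi-document, ACID transaction with PyMongo.
# Collection names and documents are placeholders, not from the announcement.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # transactions need a replica set
db = client.shop

with client.start_session() as session:
    # Everything inside start_transaction() commits or aborts as a unit.
    with session.start_transaction():
        db.orders.insert_one({"item": "book", "qty": 1}, session=session)
        db.inventory.update_one(
            {"item": "book"},
            {"$inc": {"qty": -1}},
            session=session,
        )
# Leaving the inner block commits the transaction; an exception aborts it.
```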
Salesforce Einstein team open sources TransmogrifAI, their automated machine learning library

Sugandha Lahoti
17 Aug 2018
2 min read
Salesforce has open sourced TransmogrifAI, its end-to-end automated machine learning library for structured data. The library is currently used in production to help power the Salesforce Einstein AI platform. TransmogrifAI enables data scientists at Salesforce to transform customer data into meaningful, actionable predictions. Now they have open-sourced the project to enable other developers and data scientists to build machine learning solutions at scale, fast.

TransmogrifAI is built on Scala and SparkML and automates data cleansing, feature engineering, and model selection to arrive at a performant model. It encapsulates five main components of the machine learning process:

- Feature Inference: TransmogrifAI allows users to specify a schema for their data to automatically extract the raw predictor and response signals as "Features". In addition to allowing user-specified types, TransmogrifAI also does inference of its own. The strongly-typed features allow developers to catch a majority of errors at compile time rather than at run time.
- Transmogrification, or automated feature engineering: TransmogrifAI comes with a myriad of techniques for all the supported feature types, ranging from phone numbers, email addresses, and geo-location to text data. It also optimizes the transformations to make it easier for machine learning algorithms to learn from the data.
- Automated Feature Validation: TransmogrifAI has algorithms that perform automatic feature validation to remove features with little to no predictive power. These algorithms are useful when working with high-dimensional and unknown data. They apply statistical tests based on feature types and, additionally, make use of feature lineage to detect and discard bias.
- Automated Model Selection: The TransmogrifAI Model Selector runs several different machine learning algorithms on the data and uses the average validation error to automatically choose the best one. It also automatically deals with the problem of imbalanced data by appropriately sampling the data and recalibrating predictions to match true priors.
- Hyperparameter Optimization: It automatically tunes hyperparameters and offers advanced tuning techniques.

This large-scale automation has brought down the total time taken to train models from weeks and months to a few hours, with just a few lines of code.

You can check out the project to get started with TransmogrifAI. For detailed information, read the Salesforce Engineering blog.

Salesforce Spring 18 – New features to be excited about in this release!
How to secure data in Salesforce Einstein Analytics
How to create and prepare your first dataset in Salesforce Einstein

Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript

Bhagyashree R
25 Mar 2019
2 min read
Yesterday, Microsoft released a new static type checker for Python called Pyright to fill in the gaps in existing Python type checkers like mypy. Currently, this type checker supports Python 3.0 and newer versions.

What type checking features does Pyright bring in?

- Support for PEP 484 (type hints including generics), PEP 526 (syntax for variable annotations), and PEP 544 (structural subtyping).
- Type inference for function return values, instance variables, class variables, and globals.
- Smart type constraints that can understand conditional code flow constructs like if/else statements.

Increased speed

Pyright is reported to be 5x faster than mypy and other existing type checkers written in Python. It was built with large source bases in mind and can perform incremental updates when files are modified.

No need to set up a Python environment

Since Pyright is written in TypeScript and runs within Node, you do not need to set up a Python environment or import third-party packages for installation. This proves really helpful when using the VS Code editor, which has Node as its extension runtime.

Flexible configurability

Pyright enables users to have granular control over settings. You can specify different execution environments for different subsets of a source base, and for each environment you can specify different PYTHONPATH settings, Python version, and platform target.

To know more about Pyright, check out its GitHub repository.

Debugging and Profiling Python Scripts [Tutorial]
Python 3.8 alpha 2 is now available for testing
Core CPython developer unveils a new project that can analyze his phone's 'silent connections'
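As an illustrative addendum: the PEP 484/526 annotations that a checker like Pyright analyzes look like the following small, hypothetical Python file. The names are made up for illustration; any PEP 484-compliant checker would report the commented-out call at the bottom as a type error.

```python
# Hypothetical example of the annotations a static type checker verifies
# (PEP 484 generics, PEP 526 variable annotations). Names are illustrative only.
from typing import List, Optional

retries: int = 3                      # PEP 526 variable annotation
timeout: Optional[float] = None       # may be a float or None

def average(values: List[float]) -> float:
    """Return the arithmetic mean of a non-empty list of floats."""
    return sum(values) / len(values)

average([1.0, 2.5, 4.0])   # OK
# average("not a list")    # uncommenting this makes a type checker report an error
```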

Microsoft announces Internet Explorer 10 will reach end-of-life by January 2020

Bhagyashree R
30 Jan 2019
2 min read
Microsoft shared in a blog post yesterday that, along with Windows 7, it is ending security updates and technical support for Internet Explorer 10 by January 2020, and users are advised to upgrade to IE11 by then. Support for IE10 and below ended back in 2016, except on a few environments like Windows Server 2012 and some embedded versions; now Microsoft is pulling the plug on those few remaining environments.

Microsoft wrote in the blog post, "We encourage you to use the time available to pilot IE11 in your environments. Upgrading to the latest version of Internet Explorer will ease the migration path to Windows 10, Windows Server 2016 or 2019, or Windows 10 IoT, and unlock the next generation of technology and productivity. It will also allow you to reduce the number of Internet Explorer versions you support in your environment."

Commercial customers of Windows Server 2012 and Windows Embedded 8 Standard can download IE11 via the Microsoft Update Catalog, or get the IE11 upgrade through Windows Update and Windows Server Update Services (WSUS), which Microsoft will publish later this year. IE10 will continue to receive updates throughout 2019. You can find these updates in the Update Catalog and the WSUS channel as a Cumulative Update for Internet Explorer 10. Similarly, updates for IE11 will be labeled as Cumulative Update for Internet Explorer 11 on the Microsoft Update Catalog, Windows Update, and WSUS.

Many Hacker News users are also speculating that support for IE11 could end by 2025. One user said, "If anyone is wondering about IE11, MS says 'Internet Explorer 11 will continue receiving security updates and technical support for the lifecycle of the version of Windows on which it is installed.' Extended support for Windows 10 ends on October 14, 2025. Extended support for Windows Server 2016 ends on January 11, 2027. Presumably one of those 2 dates could be considered the termination date for IE11."

Another Hacker News user believes, "...it is good time to start considering ending IE11 support as well, especially with Chromium-Edge coming out later this year. Edge is getting a Chromium back-end with talk of Windows 7 and 8 support. So, perhaps that's a strategy to kill IE11 too (fingers crossed)."

Read the official announcement by Microsoft to know more details.

Microsoft Office 365 now available on the Mac App Store
Microsoft acquires Citus Data with plans to create a 'Best Postgres Experience'
Microsoft's Bing 'back to normal' in China

Google Podcasts is transcribing full podcast episodes for improving search results

Bhagyashree R
28 Mar 2019
2 min read
On Tuesday, Android Police reported that Google Podcasts is automatically transcribing episodes. It is using these transcripts as metadata to help users find the podcasts they want to listen to, even if they don't know the title or when the episode was published. Though this is only coming to light now, Google had shared its plan of using transcripts to improve search results even before the app was launched. In an interview with Pacific Content, Zack Reneau-Wedeen, Google Podcasts product manager, said that Google could "transcribe the podcast and use that to understand more details about the podcast, including when they are discussing different topics in the episode."

This is not a user-facing feature but instead works in the background. You can see the transcription of these podcasts in the page source of the Google Podcasts web portal. After getting a hint from a user, Android Police searched for "Corbin dabbing port" instead of Corbin Davenport, a writer for Android Police. Sure enough, the app's search engine showed Episode 312 of the Android Police Podcast, his podcast, as the top result.

The transcription is enabled by Google's Cloud Speech-to-Text technology. Using transcriptions of such a huge number of podcasts, Google can do things like include timestamps, index the contents, and make the text easily searchable. This also allows Google to actually "understand" what is being discussed in a podcast without having to rely solely on the not-so-detailed notes and descriptions given by podcasters. This could prove quite helpful if users don't remember much about a show other than a quote or an interesting subject, and it makes searching frictionless.

As a user-facing feature, this could be beneficial for both listeners and creators. "It would be great if they would surface this as feature/benefit to both the creator and the listener. It would be amazing to be able to timestamp, tag, clip, collect and share all the amazing moments I've found in podcasts over the years," said a Twitter user.

Read the full story on Android Police.

Google announces the general availability of AMP for email, faces serious backlash from users
European Union fined Google 1.49 billion euros for antitrust violations in online advertising
Google announces Stadia, a cloud-based game streaming service, at GDC 2019
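As an illustrative addendum (Google has not published its internal Podcasts pipeline): transcribing an episode with word-level timestamps via the public Cloud Speech-to-Text Python client might look roughly like the sketch below. The bucket URI and configuration values are placeholders, and exact attribute types vary by client library version.

```python
# Hypothetical sketch: transcribe a podcast episode with word timestamps using the
# public Cloud Speech-to-Text API. This is not Google's internal Podcasts pipeline.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_word_time_offsets=True,  # word-level timestamps, useful for indexing
)
audio = speech.RecognitionAudio(uri="gs://example-bucket/episode-312.wav")  # placeholder URI

operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=3600)

for result in response.results:
    alternative = result.alternatives[0]
    print(alternative.transcript)
    for word in alternative.words:
        print(word.word, word.start_time, word.end_time)
```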
Logging the history of my past SQL Saturday presentations from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
3 min read
(2020-Dec-31) PASS (formerly known as the Professional Association for SQL Server) is the global community for data professionals who use the Microsoft data platform. On December 17, 2020, PASS announced that because of COVID-19 it was ceasing all operations effective January 15, 2021. PASS has offered many training and networking opportunities; one such training stream was SQL Saturday. PASS SQL Saturday was a series of free training events designed to expand knowledge sharing and the learning experience for data professionals.

Photo by Daniil Kuželev on Unsplash

Since the content and historical records of SQL Saturday will soon become unavailable, I decided to log the history of all my past SQL Saturday presentations. For this table I give full credit to André Kamman and Rob Sewell, who extracted and saved this information here: https://sqlsathistory.com/.

My SQL Saturday history (Date / Name / Location / Track / Title):
- 2016/04/16 / SQLSaturday #487 Ottawa 2016 / Ottawa / Analytics and Visualization / Excel Power Map vs. Power BI Globe Map visualization
- 2017/01/03 / SQLSaturday #600 Chicago 2017 / Addison / BI Information Delivery / Power BI with Narrative Science: Look Who's Talking!
- 2017/09/30 / SQLSaturday #636 Pittsburgh 2017 / Oakdale / BI Information Delivery / Geo Location of Twitter messages in Power BI
- 2018/09/29 / SQLSaturday #770 Pittsburgh 2018 / Oakdale / BI Information Delivery / Power BI with Maps: Choose Your Destination
- 2019/02/02 / SQLSaturday #821 Cleveland 2019 / Cleveland / Analytics Visualization / Power BI with Maps: Choose Your Destination
- 2019/05/10 / SQLSaturday #907 Pittsburgh 2019 / Oakdale / Cloud Application Development Deployment / Using Azure Data Factory Mapping Data Flows to load Data Vault
- 2019/07/20 / SQLSaturday #855 Albany 2019 / Albany / Business Intelligence / Power BI with Maps: Choose Your Destination
- 2019/08/24 / SQLSaturday #892 Providence 2019 / East Greenwich / Cloud Application Development Deployment / Continuous integration and delivery (CI/CD) in Azure Data Factory
- 2020/01/02 / SQLSaturday #930 Cleveland 2020 / Cleveland / Database Architecture and Design / Loading your Data Vault with Azure Data Factory Mapping Data Flows
- 2020/02/29 / SQLSaturday #953 Rochester 2020 / Rochester / Application Database Development / Loading your Data Vault with Azure Data Factory Mapping Data Flows

Closing notes

I think I have already told this story a couple of times. Back in 2014 - 2015, I started to attend SQL Saturday training events in the US by driving from Toronto. At that time I had only spoken a few times at our local user group and had never presented at SQL Saturdays. While driving I needed to pass customs control at the US border, and a customs officer would usually ask me a set of questions about my place of work, my citizenship, and the destination of my trip. I answered that I was going to attend an IT conference called SQL Saturday, a free event for data professionals. At that point, the customs officer positively challenged me and told me that I needed to start teaching others based on my long experience in IT; we laughed, and then he let me pass the border. I'm still very thankful to that US customs officer for this positive affirmation. SQL Saturdays have been a great journey for me!

The post Logging the history of my past SQL Saturday presentations appeared first on SQLServerCentral.

Blender 2.8 beta released with a revamped user interface and a high-end viewport, among others

Natasha Mathur
26 Dec 2018
2 min read
The Blender team released the 2.8 beta of Blender, its free and open-source 3D creation software, earlier this week. Blender 2.8 beta comes with new features and updates such as EEVEE, a high-end viewport, Collections, Cycles improvements, and 2D animation, among others.

Blender is a 3D creation suite that offers the entire 3D pipeline, including modeling, rigging, animation, simulation, rendering, compositing, and motion tracking. It allows video editing as well as game creation.

What's new in Blender 2.8 beta?

EEVEE

Blender 2.8 beta comes with EEVEE, a new physically based real-time renderer. EEVEE works as a renderer for final frames and also as the engine driving Blender's real-time viewport. It includes advanced features like volumetrics, screen-space reflections and refractions, subsurface scattering, soft and contact shadows, depth of field, camera motion blur, and bloom.

A new 3D viewport

The 3D viewport has been completely rewritten, so it can take advantage of modern graphics cards and add powerful new features. It includes a workbench engine that helps visualize your scene in flexible ways, and EEVEE also powers the viewport to enable interactive modeling and painting with PBR materials.

2D animation

There are new and improved 2D drawing capabilities, including the new Grease Pencil. Grease Pencil is a powerful new 2D animation system with a native 2D grease pencil object type, modifiers, and shader effects. In a nutshell, it provides a user-friendly interface for the 2D artist.

Collections

Blender 2.8 beta introduces "collections", a new concept that lets you organize your scene with the help of Collections and View Layers.

Cycles

In Blender 2.8 beta, the Cycles renderer gains new principled volume and hair shaders, bevel and ambient occlusion shaders, along with many other improvements and optimizations.

Other features

- Dependency graph: in Blender 2.8 beta, the core object evaluation and computation system has been rewritten, giving better performance on modern many-core CPUs and laying the groundwork for new features in future releases.
- Multi-object editing: Blender 2.8 beta allows you to enter edit modes for multiple objects together.

For more information, check out the official Blender 2.8 beta release notes.

Mozilla partners with Khronos Group to bring glTF format to Blender
Building VR objects in React V2 2.0: Getting started with polygons in Blender
Blender 2.5: Detailed Render of the Earth from Space
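As an illustrative addendum on the Collections concept described above: a minimal sketch of creating a collection and moving an object into it via Blender 2.8's Python API, with illustrative names; not taken from the release notes.

```python
# Hypothetical sketch of the Collections API in Blender 2.8's Python console.
# Object and collection names are illustrative.
import bpy

# Create a new collection and attach it to the scene's master collection.
props = bpy.data.collections.new("Props")
bpy.context.scene.collection.children.link(props)

# Add a cube; new objects land in the active collection (the master collection here).
bpy.ops.mesh.primitive_cube_add()
cube = bpy.context.active_object

# Move the cube into the "Props" collection.
props.objects.link(cube)
bpy.context.scene.collection.objects.unlink(cube)

print([c.name for c in bpy.data.collections])
```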

Golang plans to add a core implementation of an internal language server protocol

Prasad Ramesh
24 Sep 2018
3 min read
Go, the popular programming language, is adding an internal language server protocol (LSP) implementation. This is expected to bring features like code autocompletion and diagnostics to Golang tooling.

LSP is used between a user and a server to integrate features such as autocomplete, go to definition, find all references, and the like into a tool. It was created by Microsoft to define a common language for enabling programming language analyzers to communicate. It is growing in popularity, with adoption from companies like Codenvy, Red Hat, and Sourcegraph, and a rapidly growing list of editor and language communities supporting it.

Golang already has a language server available on GitHub. This version has support for hover, jump to definition, workspace symbols, and find references, but it does not support code completion and diagnostics. Sourcegraph CEO Quinn Slack stated in a comment on Hacker News: "The idea is that with a Go language server becoming a core part of Go, it will have a lot more resources invested into it and it will surpass where the current implementation is now."

The Go language server made by Sourcegraph, currently available on GitHub, is not a core part of Golang; it uses tools and custom extensions not maintained by the Go team. The hope is that the core LSP implementation will be good enough that Sourcegraph can re-use it in the future, bringing the number of implementations down to just one. Slack said in a comment that they are very happy with this direction: "We are 10,000% supportive of this, as we've discussed openly in the golang-tools group and with the Go team. The Go team was commendably empathetic about the optics here, and we urged them very, very, very directly to do this."

This core implementation of LSP by the Golang team is also beneficial for Sourcegraph from a business perspective. Sourcegraph sells a product that lets you search and browse all your code, which involves using language servers for certain features like hovers, definitions, and references. Since the core work will be done by the Golang team, Sourcegraph won't have to invest more time into building its own implementation of a Go language server.

For more information, visit the Googlesource website.

Golang 1.11 is here with modules and experimental WebAssembly port among other updates
Why Golang is the fastest growing language on GitHub
Go 2 design drafts include plans for better error handling and generics
Introducing Jupytext: Jupyter notebooks as Markdown documents, Julia, Python or R scripts

Natasha Mathur
11 Sep 2018
2 min read
Project Jupyter released Jupytext last week, a new project which allows you to convert Jupyter notebooks to and from Julia, Python or R scripts (extensions .jl, .py and .R), Markdown documents (extension .md), or R Markdown documents (extension .Rmd). It comes with features such as writing notebooks as plain text, paired notebooks, command line conversion, and round-trip conversion. It is available from within Jupyter, allowing you to work as you usually would on your notebook in Jupyter and save and read it in the formats you select. Let's have a look at its major features.

Writing notebooks as plain text

Jupytext allows plain scripts that you can draft and test in your favorite IDE and open naturally as notebooks in Jupyter. You can run the notebook in Jupyter to generate output, associate an .ipynb representation, and save and share your research.

Paired notebooks

Paired notebooks let you store an .ipynb file alongside the text-only version. Paired notebooks can be enabled by adding a jupytext_formats entry to the notebook metadata with Edit/Edit Notebook Metadata in Jupyter's menu. On saving the notebook, both the Jupyter notebook and the Python script are updated.

Command line conversion

There is a jupytext script for command line conversion between the various notebook extensions:

jupytext notebook.ipynb --to md --test          (test round-trip conversion)
jupytext notebook.ipynb --to md --output        (display the markdown version on screen)
jupytext notebook.ipynb --to markdown           (create a notebook.md file)
jupytext notebook.ipynb --to python             (create a notebook.py file)
jupytext notebook.md --to notebook              (overwrite notebook.ipynb, removing outputs)

Round-trip conversion

Round-trip conversion is also possible with Jupytext. Converting a script to a Jupyter notebook and back to a script is an identity transformation; when you associate a Jupyter kernel with your notebook, that information goes into a YAML header at the top of your script. Converting Markdown to a Jupyter notebook and back to Markdown is likewise an identity. Converting Jupyter to script and back to Jupyter preserves source and metadata. Similarly, converting Jupyter to Markdown and back to Jupyter preserves source and metadata (cell metadata is available only for R Markdown).

For more information, check out the official release notes.

10 reasons why data scientists love Jupyter notebooks
Is JupyterLab all set to phase out Jupyter Notebooks?
How everyone at Netflix uses Jupyter notebooks from data scientists, machine learning engineers, to data analysts
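As an illustrative addendum on paired notebooks: besides Jupyter's Edit Notebook Metadata menu, the same jupytext_formats entry can be set programmatically with nbformat. A minimal sketch, assuming the "ipynb,py" pairing described above and a placeholder file name:

```python
# Hypothetical sketch: set the jupytext_formats metadata entry on an existing notebook
# with nbformat, so Jupytext keeps the .ipynb and a .py version in sync on save.
# The file name and the "ipynb,py" format string are illustrative assumptions.
import nbformat

nb = nbformat.read("notebook.ipynb", as_version=4)
nb.metadata["jupytext_formats"] = "ipynb,py"   # pair the notebook with a Python script
nbformat.write(nb, "notebook.ipynb")
```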

DARPA on the hunt to catch deepfakes with its AI forensic tools underway

Natasha Mathur
08 Aug 2018
5 min read
The U.S. Defense Advanced Research Projects Agency (DARPA) has come out with AI-based forensic tools to catch deepfakes, as first reported by MIT Technology Review yesterday. According to MIT Technology Review, more tools are currently in development to expose fake images and revenge porn videos on the web. DARPA's deepfake mission project was announced earlier this year.

Image: Alec Baldwin on Saturday Night Live, face-swapped with Donald Trump

As mentioned in the MediFor blog post, "While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns". This is one of the major reasons why DARPA forensics experts are keen on finding methods to detect deepfake videos and images.

How did deepfakes originate?

Back in December 2017, a Reddit user named "DeepFakes" posted extremely real-looking explicit videos of celebrities. He used deep learning techniques to insert celebrities' faces into adult movies. Using deep learning, one can combine and superimpose existing images and videos onto original images or videos to create realistic-seeming fake videos. As per MIT Technology Review, video forgeries are done using a machine-learning technique, generative modeling, which "lets a computer learn from real data before producing fake examples that are statistically similar". Video tampering is done using two neural networks, generative adversarial networks, which work in conjunction "to produce ever more convincing fakes".

Why are deepfakes toxic?

An app named FakeApp was released earlier this year which made creating deepfakes quite easy. FakeApp uses neural networking tools developed by Google's AI division and trains itself to perform image-recognition tasks using trial and error. Since its release, the app has been downloaded more than 120,000 times, and there are tutorials online on how to create deepfakes. Apart from this, there are regular requests on deepfake forums asking users for help in creating face-swap porn videos of ex-girlfriends, classmates, politicians, celebrities, and teachers. Deepfakes can even be used to create fake news, such as world leaders declaring war on a country. The toxic potential of this technology has led to growing concern, as deepfakes have become a powerful tool for harassing people. Once deepfakes found their way onto the world wide web, many websites such as Twitter and PornHub banned them from being posted on their platforms. Reddit also announced a ban on deepfakes earlier this year, killing the "deepfakes" subreddit, which had more than 90,000 subscribers, entirely.

MediFor: DARPA's AI weapon to counter deepfakes

DARPA's Media Forensics group, also known as MediFor, works along with other researchers on developing AI tools to catch deepfakes. It is currently focusing on four techniques to catch the audiovisual discrepancies present in a forged video: analyzing lip sync, detecting speaker inconsistency, detecting scene inconsistency, and detecting content insertions.

One technique comes from a team led by Professor Siwei Lyu of SUNY Albany. Lyu mentioned that they "generated about 50 fake videos and tried a bunch of traditional forensics methods. They worked on and off, but not very well". Because deepfakes are created from static images, Lyu noticed that the faces in deepfake videos rarely blink, and that eye movement, if present, is quite unnatural.

An academic paper titled "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking", by Yuezun Li, Ming-Ching Chang and Siwei Lyu, explains a method to detect forged videos. It makes use of Long-term Recurrent Convolutional Networks (LRCN). According to the research paper, people on average blink about 17 times a minute, or 0.283 times per second; this rate increases with conversation and decreases while reading. There are several other techniques used for eye-blink detection, such as detecting the eye state by computing the vertical distance between eyelids, measuring the eye aspect ratio (EAR), and using a convolutional neural network (CNN) to detect open and closed eye states. But Li, Chang, and Lyu take a different approach: they rely on a Long-term Recurrent Convolutional Network (LRCN) model. They first perform pre-processing to identify facial features and normalize the video frame orientation, then pass cropped eye images into the LRCN for evaluation. The technique is quite effective, and better than the other approaches, with a reported accuracy of 0.99 (LRCN) compared to 0.98 (CNN) and 0.79 (EAR).

However, Lyu says that a skilled video editor can fix the non-blinking deepfakes by using images that show blinking eyes. Lyu's team has a secret, effective technique in the works to counter even that, though he hasn't divulged any details. Others at DARPA are on the lookout for similar cues such as strange head movements and odd eye color, as these little details are leading the team ever closer to the detection of deepfakes.

As mentioned in the MIT Technology Review post, "the arrival of these forensics tools may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths". MediFor states that "If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video".

Deepfakes need to stop, and the U.S. Defense Advanced Research Projects Agency (DARPA) seems all set to fight against them.

Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
A new WPA/WPA2 security attack in town: Wi-fi routers watch out!
YouTube has a $25 million plan to counter fake news and misinformation
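As an illustrative addendum on the eye aspect ratio (EAR) baseline mentioned above (not the paper's LRCN model): EAR is computed from six eye landmarks, and a blink is typically flagged when the ratio dips below a threshold for a few consecutive frames. The landmark layout and threshold here are illustrative assumptions.

```python
# Hypothetical sketch of the eye-aspect-ratio (EAR) blink baseline discussed above.
# Landmarks p1..p6 follow the usual ordering: p1/p4 are the eye corners,
# p2/p3 the upper eyelid points, p6/p5 the lower eyelid points.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of 2D eye landmarks. Returns the EAR value."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Illustrative threshold: EAR well below ~0.2 for a few consecutive frames suggests a blink.
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], dtype=float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], dtype=float)
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```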