
Tech News

3711 Articles

Cache is king: Announcing lower pricing for Cloud CDN from Cloud Blog

Matthew Emerick
14 Oct 2020
2 min read
Organizations all over the world rely on Cloud CDN for fast, reliable web and video content delivery. Now, we’re making it even easier for you to take advantage of our global network and cache infrastructure by reducing the cost of Cloud CDN for your content delivery going forward. First, we’re reducing the price of cache fill (content fetched from your origin) charges across the board, by up to 80%. You still get the benefit of our global private backbone for cache fill though—ensuring continued high performance, at a reduced cost. We’ve also removed cache-to-cache fill charges and cache invalidation charges for all customers going forward. This price reduction, along with our recent introduction of a new set of flexible caching capabilities, makes it even easier to use Cloud CDN to optimize the performance of your applications. Cloud CDN can now automatically cache web assets, video content or software downloads, control exactly how they should be cached, and directly set response headers to help meet web security best practices. You can review our updated pricing in our public documentation, and customers egressing over 1PB per month should reach out to our sales team to discuss commitment-based discounts as part of your migration to Google Cloud. To read more about Cloud CDN, or begin using it, start here.


Why you should NEVER run a Logistic Regression (unless you have to) from Featured Blog Posts - Data Science Central

Matthew Emerick
14 Oct 2020
2 min read
Hello fellow Data Science Centralists! I wrote a post on my LinkedIn about why you should NEVER run a logistic regression (unless you really have to). The main thrust is: There is no theoretical reason why a least squares estimator can't work on a 0/1 outcome. There are very, very narrow theoretical reasons to run a logistic, and unless you fall into those categories it's not worth the time. The run time of a logistic can be up to 100x longer than an OLS model, so if you are doing v-fold cross-validation, save yourself some time. The XB's are exactly the same whether you use a logistic or a linear regression. The model specification (features, feature engineering, feature selection, interaction terms) is identical -- and this is what you should be focused on anyway. Myth: linear regression can only fit linear models. There is *one* practical reason to run a logistic: if the results are all very close to 0 or to 1, and you can't hard-code your prediction to 0 or 1 when the linear model falls outside the normal probability range, then use the logistic. So if you are pricing an insurance policy based on risk, you can't have a hard-coded 0.000% prediction, because you can't price that correctly. See the video here and slides here. I think it'd be nice to start a debate on this topic!
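Below is a minimal, hedged sketch of the comparison the post describes, using synthetic data and scikit-learn; the dataset, solver settings, and timing code are illustrative assumptions, not taken from the original post.

import time
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Synthetic 0/1 outcome driven by a linear signal
rng = np.random.default_rng(0)
X = rng.normal(size=(200_000, 20))
beta = rng.normal(size=20)
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta))))

t0 = time.perf_counter()
ols = LinearRegression().fit(X, y)              # linear probability model on the 0/1 target
t_ols = time.perf_counter() - t0

t0 = time.perf_counter()
logit = LogisticRegression(max_iter=1000).fit(X, y)
t_logit = time.perf_counter() - t0

# The two fits rank observations almost identically; the logistic fit is usually
# the slower of the two (by how much depends on the solver and the data).
corr = np.corrcoef(ols.predict(X), logit.predict_proba(X)[:, 1])[0, 1]
print(f"OLS: {t_ols:.2f}s  logistic: {t_logit:.2f}s  prediction correlation: {corr:.3f}")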


AWS Free Tier: what is it and who qualifies? from Android Development – Android Authority

Matthew Emerick
14 Oct 2020
5 min read
With AWS Free Tier, even the smallest businesses and developers can benefit from powerful cloud infrastructure. Amazon Web Services (AWS) is a cloud platform that provides businesses with a wide range of powerful online tools. These include tools to increase security, back up data, handle machine learning tasks, synchronize accounts across numerous devices, and much more. In short, companies use AWS to provide services that they could not otherwise offer. See also: What is AWS certification? But here’s the good news: many of these services are free thanks to the “AWS Free Tier.” In many cases, these services come with severe restrictions and/or time limits. But in some instances, services are genuinely free on a permanent basis. The tricky part? Discerning one from the other. So, don’t assume that you can’t afford AWS products! Read on to find out just what you are eligible for and how to start using the services without incurring unexpected bills.

What does AWS Free Tier offer?

AWS Free Tier gives you access to over 85 products from Amazon. There is variation in how long they remain free and whether any restrictions are imposed. For example, you can get up to 5GB of storage completely free, or 250 hours of Amazon SageMaker (for building, managing, and deploying machine learning models). Or how about 30 days of GuardDuty for intelligent threat detection? Amazon Comprehend offers powerful natural language processing up to 5 million characters per month, fully free for the first 12 months. Meanwhile, services such as Amazon DynamoDB, a powerful and scalable NoSQL database, are available in the free tier with up to 25GB of storage and no time limit. Amazon Chime, on the other hand, is a video conferencing service that is permanently free with unlimited usage. You can find a full list of the AWS Free Tier products at aws.amazon.com. Sounds great, right? Well, it is. But you also need to be careful, as there are many clauses, caveats, and exceptions that are easy to miss. See also: The best AWS courses for professionals. The issue revolves around the restrictions and caps. For example, should you exceed the character restrictions using Amazon Comprehend, you will no longer be eligible for the AWS Free Tier. That means you need to carefully monitor usage to avoid an unexpected charge. In some cases, this can be extremely confusing, as the metrics used to cap usage are varied and sometimes seemingly contradictory.

What happens when the free period runs out?

So, let’s say that you’ve discovered an amazing free AWS product that lets you scale your business or add additional services. What happens when the free period runs out? As you might expect, once the Free Tier period expires, you simply switch to a paid plan. However, there is one very important catch to understand: AWS Free Tier eligibility applies across the entire account. That is to say, if you exceed usage/time restrictions on just one of the products offered, this will end the Free Tier for your entire account. So, if you are using multiple free trials and one of them expires, the next month’s bill could be rather large. While it’s possible to receive an alert when you begin to exceed your Free Tier allowance on certain products, these alerts are generally delayed by 24 hours – so it may be too late. The fact that there’s no option to simply freeze the service at this point suggests that Amazon might be hoping that will be the case…

Who is eligible for AWS Free Tier?
Think that AWS could benefit your business? Fortunately, the AWS Free Tier comes into effect the moment you create an account. If you have never used an AWS product before, you are automatically eligible for the Free Tier, and it will be applied by default. However, if you have used AWS in the past, eligibility depends on your previous usage. As long as none of the products you have trialed have exceeded the limitations of the Free Tier, and your account is under one year old, then you should still be able to make use of free trials. Unsure of your standing? You can check manually by heading to the Billing and Cost Management console. If your account is still eligible for AWS Free Tier, you will see a confirmation message there.

Conclusions

On paper, AWS Free Tier is an amazing resource for businesses that want to leverage the power of the cloud to scale up. Unfortunately, the pricing system is anything but transparent. This is often cited as one of the major drawbacks of AWS when compared to competing packages such as Microsoft Azure or Google Cloud Platform. If you’re a developer just toying around with these services, it might be safer to choose one of those options. Both of them offer some form of a free trial. See also: AWS vs Azure vs Google Cloud – Which certification is best for professionals? Hopefully, this is something that Amazon will remedy if the competition applies enough pressure. For now, if you decide to take advantage of AWS Free Tier, just make sure to read the fine print!
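The article notes that Free Tier alerts can arrive too late; a common complementary safeguard is a CloudWatch billing alarm. The sketch below is a hedged example (not from the article): it assumes billing alerts are enabled for the account, and the SNS topic ARN and threshold are placeholders to replace with your own values.

import boto3

# Billing metrics are only published in the us-east-1 region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="free-tier-spend-warning",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                 # evaluate every six hours
    EvaluationPeriods=1,
    Threshold=1.0,                # warn as soon as estimated charges pass $1
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder ARN
)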


Applications of Machine Learning in FinTech from Featured Blog Posts - Data Science Central

Matthew Emerick
14 Oct 2020
1 min read
Machine learning is a type of artificial intelligence that provides computers with the ability to learn without being explicitly programmed. The science behind machine learning is interesting and application-oriented. Many startups have disrupted the FinTech ecosystem with machine learning as their key technology. FinTech companies use machine learning in a variety of applications that fall under different subcategories. Let us look at some of these applications and the companies using them.
Table of contents:
Predictive Analysis for Credit Scores and Bad Loans
Accurate Decision-Making
Content/Information Extraction
Fraud Detection and Identity Management
To read the whole article, with each point detailed, click here.
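As a toy illustration of the fraud detection category listed above (not taken from the article), here is a short scikit-learn sketch that flags unusually large transactions with an unsupervised anomaly detector; the data and thresholds are invented for the example.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_txns = rng.normal(loc=50, scale=15, size=(5000, 1))   # typical transaction amounts
fraud_txns = rng.normal(loc=900, scale=200, size=(20, 1))    # a handful of unusually large ones
amounts = np.vstack([normal_txns, fraud_txns])

model = IsolationForest(contamination=0.005, random_state=0).fit(amounts)
flags = model.predict(amounts)                               # -1 means flagged as anomalous
print(f"flagged {np.sum(flags == -1)} of {len(amounts)} transactions for review")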


Zone Redundancy for Azure Cache for Redis now in preview from Microsoft Azure Blog > Announcements

Matthew Emerick
14 Oct 2020
3 min read
Between waves of pandemics, hurricanes, and wildfires, you don’t need cloud infrastructure adding to your list of worries this year. Fortunately, there has never been a better time to ensure your Azure deployments stay resilient. Availability zones are one of the best ways to mitigate risks from outages and disasters. With that in mind, we are announcing the preview for zone redundancy in Azure Cache for Redis. Availability Zones on Azure Azure Availability Zones are geographically isolated datacenter locations within an Azure region, providing redundant power, cooling, and networking. By maintaining a physically separate set of resources with the low latency from remaining in the same region, Azure Availability Zones provide a high availability solution that is crucial for businesses requiring resiliency and business continuity. Redundancy options in Azure Cache for Redis Azure Cache for Redis is increasingly becoming critical to our customers’ data infrastructure. As a fully managed service, Azure Cache for Redis provides various high availability options. By default, caches in the standard or premium tier have built-in replication with a two-node configuration—a primary and a replica hosting two identical copies of your data. New in preview, Azure Cache for Redis can now support up to four nodes in a cache distributed across multiple availability zones. This update can significantly enhance the availability of your Azure Cache for Redis instance, giving you greater peace of mind and hardening your data architecture against unexpected disruption. High Availability for Azure Cache for Redis The new redundancy features deliver better reliability and resiliency. First, this update expands the total number of replicas you can create. You can now implement up to three replica nodes in addition to the primary node. Having more replicas generally improves resiliency (even if they are in the same availability zone) because of the additional nodes backing up the primary. Even with more replicas, a datacenter-wide outage can still disrupt your application. That’s why we’re also enabling zone redundancy, allowing replicas to be located in different availability zones. Replica nodes can be placed in one or multiple availability zones, with failover automatically occurring if needed across availability zones. With Zone Redundancy, your cache can handle situations where the primary zone is knocked offline due to issues like floods, power outages, or even natural disasters. This increases availability while maintaining the low latency required from a cache. Zone redundancy is currently only available on the premium tier of Azure Cache for Redis, but it will also be available on the enterprise and enterprise flash tiers when the preview is released. Industry-leading service level agreement Azure Cache for Redis already offers an industry-standard 99.9 percent service level agreement (SLA). With the addition of zone redundancy, the availability increases to a 99.95 percent level, allowing you to meet your availability needs while keeping your application nimble and scalable. Adding zone redundancy to Azure Cache for Redis is a great way to promote availability and peace of mind during turbulent situations. Learn more in our documentation and give it a try today. If you have any questions or feedback, please contact us at AzureCache@microsoft.com.
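To see how many replicas back a given cache from client code, you can inspect Redis replication info directly. The snippet below is a hedged sketch using the redis-py client; the hostname and access key are placeholders, and the node counts you see will depend on the tier and zone configuration you chose.

import redis

cache = redis.Redis(
    host="mycache.redis.cache.windows.net",   # placeholder cache name
    port=6380,                                # Azure Cache for Redis SSL port
    ssl=True,
    password="<primary-access-key>",          # placeholder access key
)

replication = cache.info("replication")
print(replication.get("role"), "with", replication.get("connected_slaves", 0), "connected replica(s)")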


Hans-Juergen Schoenig: pg_squeeze: Optimizing PostgreSQL storage from Planet PostgreSQL

Matthew Emerick
14 Oct 2020
8 min read
Is your database growing at a rapid rate? Does your database system slow down all the time? And maybe you have trouble understanding why this happens? Maybe it is time to take a look at pg_squeeze and fix your database once and for all. pg_squeeze has been designed to shrink your database tables without downtime. No more need for VACUUM FULL – pg_squeeze has it all. The first question any PostgreSQL person will ask is: Why not use VACUUM or VACUUM FULL? There are various reasons: A normal VACUUM does not really shrink the table on disk. Normal VACUUM will look for free space, but it won’t return this space to the operating system. VACUUM FULL does return space to the operating system, but it needs a table lock. In case your table is small this usually does not matter. However, what if your table is many TBs in size? You cannot simply lock up a large table for hours just to shrink it after table bloat has ruined performance. pg_squeeze can shrink large tables using only a small, short lock. However, there is more. The following listing contains some of the operations pg_squeeze can do with minimal locking:
Shrink tables
Move tables and indexes from one tablespace to another
Index organize (“cluster”) a table
Change the on-disk FILLFACTOR
After this basic introduction it is time to take a look and see how pg_squeeze can be installed and configured.

PostgreSQL: Installing pg_squeeze

pg_squeeze can be downloaded for free from our GitHub repository. However, binary packages are available for most Linux distributions. If you happen to run Solaris, AIX, FreeBSD or some other less widespread operating system, just get in touch with us. We are eager to help. After you have compiled pg_squeeze or installed the binaries, some changes have to be made to postgresql.conf:

wal_level = logical
max_replication_slots = 10   # minimum 1
shared_preload_libraries = 'pg_squeeze'

The most important thing is to set wal_level to logical. Internally, pg_squeeze works as follows: it creates a new datafile (snapshot) and then applies changes made to the table while this snapshot is copied over. This is done using logical decoding. Of course, logical decoding needs replication slots. Finally, the library has to be loaded when PostgreSQL is started. This is basically it – pg_squeeze is ready for action.

Understanding table bloat in PostgreSQL

Before we dive deeper into pg_squeeze it is important to understand table bloat in general. Let us take a look at the following example:

test=# CREATE TABLE t_test (id int);
CREATE TABLE
test=# INSERT INTO t_test SELECT * FROM generate_series(1, 2000000);
INSERT 0 2000000
test=# SELECT pg_size_pretty(pg_relation_size('t_test'));
 pg_size_pretty
----------------
 69 MB
(1 row)

Once we have imported 2 million rows the size of the table is 69 MB. What happens if we update these rows and simply add one?

test=# UPDATE t_test SET id = id + 1;
UPDATE 2000000
test=# SELECT pg_size_pretty(pg_relation_size('t_test'));
 pg_size_pretty
----------------
 138 MB
(1 row)

The size of the table is going to double. Remember, UPDATE has to duplicate the row, which of course eats up some space. The most important observation, however, is this: if you run VACUUM, the size of the table on disk is still 138 MB – storage IS NOT returned to the operating system. VACUUM can shrink tables in some rare instances. However, in reality the table is basically never going to return space to the filesystem, which is a major issue.
Table bloat is one of the most frequent reasons for bad performance, so it is important to either prevent it or make sure the table is allowed to shrink again.

PostgreSQL: Shrinking tables again

If you want to use pg_squeeze you have to make sure that a table has a primary key. It is NOT enough to have unique indexes – it really has to be a primary key. The reason is that we use replica identities internally, so we basically suffer from the same restrictions as other tools using logical decoding. Let us add a primary key and squeeze the table:

test=# ALTER TABLE t_test ADD PRIMARY KEY (id);
ALTER TABLE
test=# SELECT squeeze.squeeze_table('public', 't_test', null, null, null);
 squeeze_table
---------------

(1 row)

Calling pg_squeeze manually is one way to handle a table. It is the preferred method if you want to shrink a table once. As you can see the table is smaller than before:

test=# SELECT pg_size_pretty(pg_relation_size('t_test'));
 pg_size_pretty
----------------
 69 MB
(1 row)

The beauty is that minimal locking was needed to do that.

Scheduling table reorganization

pg_squeeze has a builtin job scheduler which can operate in many ways. It can tell the system to squeeze a table within a certain timeframe or trigger a process in case some thresholds have been reached. Internally pg_squeeze uses configuration tables to control its behavior. Here is how it works:

test=# \d squeeze.tables
                                   Table "squeeze.tables"
      Column      |       Type       | Collation | Nullable |                  Default
------------------+------------------+-----------+----------+--------------------------------------------
 id               | integer          |           | not null | nextval('squeeze.tables_id_seq'::regclass)
 tabschema        | name             |           | not null |
 tabname          | name             |           | not null |
 clustering_index | name             |           |          |
 rel_tablespace   | name             |           |          |
 ind_tablespaces  | name[]           |           |          |
 free_space_extra | integer          |           | not null | 50
 min_size         | real             |           | not null | 8
 vacuum_max_age   | interval         |           | not null | '01:00:00'::interval
 max_retry        | integer          |           | not null | 0
 skip_analyze     | boolean          |           | not null | false
 schedule         | squeeze.schedule |           | not null |
Indexes:
    "tables_pkey" PRIMARY KEY, btree (id)
    "tables_tabschema_tabname_key" UNIQUE CONSTRAINT, btree (tabschema, tabname)
Check constraints:
    "tables_free_space_extra_check" CHECK (free_space_extra >= 0 AND free_space_extra < 100)
    "tables_min_size_check" CHECK (min_size > 0.0::double precision)
Referenced by:
    TABLE "squeeze.tables_internal" CONSTRAINT "tables_internal_table_id_fkey" FOREIGN KEY (table_id) REFERENCES squeeze.tables(id) ON DELETE CASCADE
    TABLE "squeeze.tasks" CONSTRAINT "tasks_table_id_fkey" FOREIGN KEY (table_id) REFERENCES squeeze.tables(id) ON DELETE CASCADE
Triggers:
    tables_internal_trig AFTER INSERT ON squeeze.tables FOR EACH ROW EXECUTE FUNCTION squeeze.tables_internal_trig_func()

The last column here is worth mentioning: It is a custom data type capable of holding cron-style scheduling information.
The custom data type looks as follows:

test=# \d squeeze.schedule
         Composite type "squeeze.schedule"
    Column     |       Type       | Collation | Nullable | Default
---------------+------------------+-----------+----------+---------
 minutes       | squeeze.minute[] |           |          |
 hours         | squeeze.hour[]   |           |          |
 days_of_month | squeeze.dom[]    |           |          |
 months        | squeeze.month[]  |           |          |
 days_of_week  | squeeze.dow[]    |           |          |

If you want to make sure that pg_squeeze takes care of a table, simply insert the configuration into the table:

test=# INSERT INTO squeeze.tables (tabschema, tabname, schedule) VALUES ('public', 't_test', ('{30}', '{22}', NULL, NULL, '{3, 5}'));
INSERT 0 1

In this case public.t_test will be squeezed at 22:30h in the evening every 3rd and 5th day of the week. The main question is: when is that? In our setup days 0 and 7 are Sundays. So 3 and 5 mean Wednesday and Friday at 22:30h. Let us check what the configuration looks like:

test=# \x
Expanded display is on.
test=# SELECT *, (schedule).* FROM squeeze.tables;
-[ RECORD 1 ]----+----------------------
id               | 1
tabschema        | public
tabname          | t_test
clustering_index |
rel_tablespace   |
ind_tablespaces  |
free_space_extra | 50
min_size         | 8
vacuum_max_age   | 01:00:00
max_retry        | 0
skip_analyze     | f
schedule         | ({30},{22},,,"{3,5}")
minutes          | {30}
hours            | {22}
days_of_month    |
months           |
days_of_week     | {3,5}

Once this configuration is in place, pg_squeeze will automatically take care of things. Everything is controlled by configuration tables, so you can easily control and monitor the inner workings of pg_squeeze.

Handling errors

If pg_squeeze decides to take care of a table, it can happen that the reorg process actually fails. Why is that the case? One might drop a table and recreate it, the structure might change, or pg_squeeze might not be able to get the brief lock at the end. Of course it is also possible that the tablespace you want to move a table to does not have enough space. There are many issues which can lead to errors. Therefore one has to track those reorg processes. The way to do that is to inspect squeeze.errors:

test=# SELECT * FROM squeeze.errors;
 id | occurred | tabschema | tabname | sql_state | err_msg | err_detail
----+----------+-----------+---------+-----------+---------+------------
(0 rows)

This log table contains all the relevant information needed to track things fast and easily.

Finally …

pg_squeeze is not the only Open Source tool we have published for PostgreSQL. If you are looking for a cutting edge scheduler we recommend taking a look at what pg_timetable has to offer. The post pg_squeeze: Optimizing PostgreSQL storage appeared first on Cybertec.
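If you manage this configuration from application code rather than from psql, a rough sketch with Python and psycopg2 might look like the following; the connection string is a placeholder, and the schedule values simply mirror the example above.

import psycopg2

# Placeholder DSN; adjust to your environment.
conn = psycopg2.connect("dbname=test user=postgres host=localhost")
with conn, conn.cursor() as cur:
    # Register t_test for automatic squeezing on Wednesday and Friday at 22:30.
    cur.execute(
        """
        INSERT INTO squeeze.tables (tabschema, tabname, schedule)
        VALUES (%s, %s, ('{30}', '{22}', NULL, NULL, '{3,5}'))
        """,
        ("public", "t_test"),
    )
    # Check whether any scheduled reorganizations have failed.
    cur.execute("SELECT occurred, tabname, err_msg FROM squeeze.errors ORDER BY occurred DESC")
    for occurred, tabname, err_msg in cur.fetchall():
        print(f"{occurred}: squeeze of {tabname} failed: {err_msg}")
conn.close()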

Introducing Data Literacy for All: Free data skills training for individuals and organizations from What's New

Matthew Emerick
13 Oct 2020
4 min read
Courtney Totten, Director, Academic Programs

Data is becoming more pervasive at work and in our everyday lives. Whether you’re optimizing your sales organization or your fantasy football team, data is a key ingredient of success. Although more people are familiar with data, many are still struggling with fundamental data literacy—the ability to explore, understand, and communicate with data. This is a problem, particularly since data skills are now a prerequisite for many jobs—and the demand is growing. In fact, in 2020, LinkedIn listed data-driven decision making skills like analytical reasoning and business analysis as two of the top ten most in-demand hard skills on their Top Skills Companies Need Most list. To help address this demand, we are so excited to announce the launch of Data Literacy for All, a free eLearning program that includes over five hours of training to help anyone learn foundational data skills. Whether you are new to data, looking to accelerate in your career, or seeking a new career path, Data Literacy for All can help you develop a foundational data skillset. We hear from customers time and time again that developing data skills for employees is one of the main challenges they face when deploying analytics. Whether it’s hiring new talent with data skills on their resume, or reskilling existing employees, having a baseline of data literacy across the organization is a critical component within a Data Culture. Helping our customers fill their talent pipeline with trained candidates has been a focus of ours for many years. Since inception in 2011, Tableau Academic Programs have been driving data literacy efforts in higher education, offering free software and learning resources to enable and empower future data workers. Through these efforts, we have provided more than 1.3 million students and instructors with access to software and data skills. We will continue investing in our future generations, but we also recognized the opportunity to do more. We are thrilled to expand our work with data literacy beyond the classroom.

Developing the Data Literacy for All program and coursework

Data Literacy for All fills common knowledge gaps and ‘learning pain points’ to allow anyone to begin and continue their data journey. The program includes the following courses:
Introduction to Data Literacy
Recognizing Well-Structured Data
Exploring Variables and Field Types
Exploring Aggregation and Granularity
Understanding Distributions
Understanding Variation for Wise Comparisons
Using Correlation and Regression to Examine Relationships
One of our long-term goals for this program is to open doors to more diversity within formal and informal data roles. We believe that creating a more data literate world begins with education and that foundational data skills are building blocks for our future. Making these foundational skills easy and accessible to anyone and everyone around the world is a start. When it came to developing data literacy resources for all types of learners, we found inspiration from our Academic Programs and our existing instructor relationships. Through Tableau for Teaching we worked closely with instructors around the world who were building their own analytics programs within their institutions. So to tackle the challenge of data literacy, we took a unique approach. We hired Dr. Sue Kraemer as our first Academic Program Instructional Designer.
Sue was brought on board to help us serve academia’s growing needs and to drive development of this integral education in support of our instructors in higher education. Prior to joining Tableau, Sue was an instructor at the University of Washington Bothell, where she taught Statistics and Data Visualization courses in Health Studies. As a result of her academic experience, we were able to create a training that is a bridge between foundational skills and practical business needs—a necessary balance for today’s knowledge workers. Access data literacy courses for free, starting today! Now anyone can access this training for free. We are so excited about this program, as it is a critical part of helping people see and understand data. But this is just the beginning! At Tableau we will continue to strive for additional ways to help more people become data rockstars because we believe that access to data and the right skills can truly change the world. Start your data literacy journey today!


What you need to know to begin your journey to CDP from Cloudera Blog

Matthew Emerick
13 Oct 2020
5 min read
Recently, my colleague published a blog, Build on your investment by Migrating or Upgrading to CDP Data Center, which articulates great CDP Private Cloud Base features. Existing CDH and HDP customers can immediately benefit from this new functionality. This blog focuses on the process of accelerating your CDP journey to CDP Private Cloud Base, for both professional services engagements and self-service upgrades.

Upgrade with Confidence with Cloudera Professional Services

Cloudera recommends working with Cloudera Professional Services to simplify your journey to CDP Private Cloud Base and get faster time to value. Cloudera PS offers SmartUpgrade to help you efficiently upgrade or migrate to CDP Private Cloud Base with minimal disruptions to your SLAs.

Preparing for an Upgrade

Whether you choose to manage your own upgrade process or leverage our Professional Services organization, Cloudera provides the tools you need to get started.

Before you Begin

Contact your account team to start the process. Generate a diagnostic bundle to send information about your cluster to Cloudera Support for analysis. The diagnostic bundle consists of information about the health and performance of the cluster. Learn more about how to send diagnostic bundles:
1. On a CDH cluster, use Cloudera Manager.
2. On an HDP cluster, use SmartSense.
Gather information that the diagnostic tool will not be able to obtain automatically:
What is the primary purpose of the cluster?
HDP customers only: Which relational database and version is used?
How many database objects do you have?
Which external APIs are you using?
Which third-party software do you use with this cluster?

Create an Upgrade Planning Case

To manage your own upgrade process, follow these steps to file an upgrade planning case and ensure a smooth upgrade experience:
Go to the Cloudera Support Hub and click Create Case.
Select Upgrade Planning.
In Product to Upgrade, select a product from the list. Choices are: Ambari, HDP, HDP & Ambari, CDH, Cloudera Manager, CDH & Cloudera Manager.
Are you upgrading to CDP Private Cloud Base? Select Yes or No.
What is your target version? Select the version of the product and the version of Cloudera Manager or Ambari.
Complete information about your assets and timeline.
Attach the diagnostic bundle you created. Diagnostics will run through your bundle data to identify potential issues that need to be addressed prior to an upgrade.
Include the information that you gathered earlier in the “Before you Begin” step.
A case is created.

CDP Upgrade Advisor

The CDP Upgrade Advisor is a utility available on my.cloudera.com for Cloudera customers. This tool performs an evaluation of diagnostic data to determine the CDP readiness of your CDH or HDP cluster environment. Running the upgrade advisor against the cluster in question is one of your first steps to adopting CDP, followed by an in-depth conversation with your Cloudera account team to review the specific results. This utility raises awareness of clusters that may present risks during an upgrade to CDP due to, for example, an unsupported version of the operating system currently in use. The upgrade advisor utility is focused on the environment and platform in use but doesn’t take into consideration use cases, the actual cluster data, or workflows in use. Analysis of these critical areas occurs as part of your CDP Journey Workshop with your Cloudera account team and Professional Services.
To run the Upgrade Advisor: Click Upgrade Path to begin the evaluation based on your diagnostic data The first thing you’ll see is a list of your active assets (CDH, DataFlow, HDP, Key Trustee, and CDP assets). The upgrade advisor is available only for CDH and HDP environments. Click the respective CDP Upgrade Advisor link on the right-hand side of a CDH or HDP asset to obtain the evaluation results The Upgrade Advisor determines a recommended upgrade path for the asset in question. You may see a recommendation to upgrade to CDP Data Center (Private Cloud Base), Public Cloud, or not to upgrade at this time due to the environmental failures identified. Beneath the recommendations are details of the cluster asset being evaluated along with contact details for your Cloudera account team.   The Evaluation Details section includes the results of the validation checks being performed against your diagnostic data. This includes risks and recommendations such as a particular service or version of 3rd party software that will not be supported after an upgrade to CDP. Each category of the evaluation details also features icons that will take you to the relevant CDP documentation. You can view a video (recommended) about the Upgrade Advisor:   Validate partner certifications For partner ecosystem support for CDP, you can validate your partner application certifications with this blog:  Certified technical partner solutions help customers success with Cloudera Data Platform.  Please also work with your account team for partner technology applications that are not currently on the certified list. Learn from Customer Success Stories Take a deeper look at one customer’s journey to CDP in this blog. A financial services customer upgraded their environment from CDH to CDP with Cloudera Professional Services in order to modernize their architecture to ingest data in real-time using the new streaming features available in CDP and make the data available to their users faster than ever before.  Summary Take the next steps on your journey to CDP now by visiting my.cloudera.com to assess your clusters in the Upgrade Advisor and sign up for a trial of CDP Private Cloud Base.  To learn more about CDP, please check out the CDP Resources page.  The post What you need to know to begin your journey to CDP appeared first on Cloudera Blog.


.NET Framework republishing of July 2020 Security Only Updates from .NET Blog

Matthew Emerick
13 Oct 2020
3 min read
Today, we are republishing the July 2020 Security Only Updates for .NET Framework to resolve a known issue that affected the original release.  You should install this version (V2) of the update as part of your normal security routine. Security CVE-2020-1147– .NET Framework Remote Code Execution Vulnerability A remote code execution vulnerability exists in .NET Framework when the software fails to check the source markup of XML file input. An attacker who successfully exploited the vulnerability could run arbitrary code in the context of the process responsible for deserialization of the XML content. To exploit this vulnerability, an attacker could upload a specially crafted document to a server utilizing an affected product to process content. The security update addresses the vulnerability by correcting how .NET Framework validates the source markup of XML content. This security update affects how .NET Framework’s System.Data.DataTable and System.Data.DataSet types read XML-serialized data. Most .NET Framework applications will not experience any behavioral change after the update is installed. For more information on how the update affects .NET Framework, including examples of scenarios which may be affected, please see the DataTable and DataSet security guidance document. To learn more about the vulnerabilities, go to the following Common Vulnerabilities and Exposures (CVE). CVE-2020-1147 Known Issues This release resolves the known issue below. Symptoms: After you apply this update, some applications experience a TypeInitializationException exception when they try to deserialize System.Data.DataSet or System.Data.DataTable instances from the XML within a SQL CLR stored procedure. The stack trace for this exception appears as follows: System.TypeInitializationException: The type initializer for ‘Scope’ threw an exception. —> System.IO.FileNotFoundException: Could not load file or assembly ‘System.Drawing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a’ or one of its dependencies. The system cannot find the file specified. at System.Data.TypeLimiter.Scope.IsTypeUnconditionallyAllowed(Type type) at System.Data.TypeLimiter.Scope.IsAllowedType(Type type) at System.Data.TypeLimiter.EnsureTypeIsAllowed(Type type, TypeLimiter capturedLimiter) Resolution: Install the latest version of this update that was released on October 13th, 2020. Getting the Update The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog. Microsoft Update Catalog You can get the update via the Microsoft Update Catalog. For Windows 10, NET Framework 4.8 updates are available via Windows Update, Windows Server Update Services, Microsoft Update Catalog. Updates for other versions of .NET Framework are part of the Windows 10 Monthly Cumulative Update. **Note**: Customers that rely on Windows Update and Windows Server Update Services will automatically receive the .NET Framework version-specific updates. Advanced system administrators can also take use of the below direct Microsoft Update Catalog download links to .NET Framework-specific updates. Before applying these updates, please ensure that you carefully review the .NET Framework version applicability, to ensure that you only install updates on systems where they apply. The following table is for earlier Windows and Windows Server versions. 
Product Version | Security Only Update
Windows 8.1, Windows RT 8.1 and Windows Server 2012 R2 | 4566468
.NET Framework 3.5 | Catalog 4565580
.NET Framework 4.5.2 | Catalog 4565581
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 | Catalog 4565585
.NET Framework 4.8 | Catalog 4565588
Windows Server 2012 | 4566467
.NET Framework 3.5 | Catalog 4565577
.NET Framework 4.5.2 | Catalog 4565582
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 | Catalog 4565584
.NET Framework 4.8 | Catalog 4565587
Windows 7 SP1 and Windows Server 2008 R2 SP1 | 4566466
.NET Framework 3.5.1 | Catalog 4565579
.NET Framework 4.5.2 | Catalog 4565583
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 | Catalog 4565586
.NET Framework 4.8 | Catalog 4565589
Windows Server 2008 | 4566469
.NET Framework 2.0, 3.0 | Catalog 4565578
.NET Framework 4.5.2 | Catalog 4565583
.NET Framework 4.6 | Catalog 4565586

The post .NET Framework republishing of July 2020 Security Only Updates appeared first on .NET Blog.


.NET Framework October 2020 Security and Quality Rollup Updates from .NET Blog

Matthew Emerick
13 Oct 2020
5 min read
Today, we are releasing the October 2020 Security and Quality Rollup Updates for .NET Framework. Security CVE-2020-16937– .NET Framework Information Disclosure Vulnerability An information disclosure vulnerability exists when the .NET Framework improperly handles objects in memory. An attacker who successfully exploited the vulnerability could disclose contents of an affected system’s memory. To exploit the vulnerability, an authenticated attacker would need to run a specially crafted application. The update addresses the vulnerability by correcting how the .NET Framework handles objects in memory. To learn more about the vulnerabilities, go to the following Common Vulnerabilities and Exposures (CVE). CVE-2020-16937 Quality and Reliability This release contains the following quality and reliability improvements. ASP.NET Disabled resuse of AppPathModifier in ASP.Net control output. HttpCookie objects in the ASP.Net request context will be created with configured defaults for cookie flags instead of .NET-style primitive defaults to match the behavior of `new HttpCookie(name)`. CLR1 Added a CLR config variable Thread_AssignCpuGroups (1 by default) that can be set to 0 to disable automatic CPU group assignment done by the CLR for new threads created by Thread.Start() and thread pool threads, such that an app may do its own thread-spreading. Addressed a rare data corruption that can occur when using new API’s such as Unsafe.ByteOffset which are often used with the new Span types. The corruption could occur when a GC operation is performed while a thread is calling Unsafe.ByteOffset from inside of a loop. SQL Addressed a failure that sometimes occured when a user connects to one Azure SQL database, performed an enclave based operation, and then connected to another database under the same server that has the same Attestation URL and performed an enclave operation on the second server. Windows Forms Addressed a regression introduced in .NET Framework 4.8, where Control.AccessibleName, Control.AccessibleRole, and Control.AccessibleDescription properties stopped working for the following controls:Label, GroupBox, ToolStrip, ToolStripItems, StatusStrip, StatusStripItems, PropertyGrid, ProgressBar, ComboBox, MenuStrip, MenuItems, DataGridView. Addressed a regression in accessible name for combo box items for data bound combo boxes. .NET Framework 4.8 started using type name instead of the value of the DisplayMember property as an accessible name, this improvement uses the DisplayMember again. WCF2 Addressed an issue with WCF services sometimes failing to start when starting multiple services concurrently. 1 Common Language Runtime (CLR) 2 Windows Communication Foundation (WCF) Getting the Update The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog. The Security Only Update is available via Windows Server Update Services and Microsoft Update Catalog. Microsoft Update Catalog You can get the update via the Microsoft Update Catalog. For Windows 10, NET Framework 4.8 updates are available via Windows Update, Windows Server Update Services, Microsoft Update Catalog. Updates for other versions of .NET Framework are part of the Windows 10 Monthly Cumulative Update. **Note**: Customers that rely on Windows Update and Windows Server Update Services will automatically receive the .NET Framework version-specific updates. 
Advanced system administrators can also take use of the below direct Microsoft Update Catalog download links to .NET Framework-specific updates. Before applying these updates, please ensure that you carefully review the .NET Framework version applicability, to ensure that you only install updates on systems where they apply. The following table is for Windows 10 and Windows Server 2016+ versions. Product Version Cumulative Update Windows 10 Version Next and Windows Server, Version Next .NET Framework 3.5, 4.8 Catalog 4578967 Windows 10, version 20H2 and Windows Server, version 20H2 .NET Framework 3.5, 4.8 Catalog 4578968 Windows 10 2004 and Windows Server, version 2004 .NET Framework 3.5, 4.8 Catalog 4578968 Windows 10 1909 and Windows Server, version 1909 .NET Framework 3.5, 4.8 Catalog 4578974 Windows 10 1903 and Windows Server, version 1903 .NET Framework 3.5, 4.8 Catalog 4578974 Windows 10 1809 (October 2018 Update) and Windows Server 2019 4579976 .NET Framework 3.5, 4.7.2 Catalog 4578966 .NET Framework 3.5, 4.8 Catalog 4578973 Windows 10 1803 (April 2018 Update) .NET Framework 3.5, 4.7.2 Catalog 4580330 .NET Framework 4.8 Catalog 4578972 Windows 10 1709 (Fall Creators Update) .NET Framework 3.5, 4.7.1, 4.7.2 Catalog 4580328 .NET Framework 4.8 Catalog 4578971 Windows 10 1703 (Creators Update) .NET Framework 3.5, 4.7, 4.7.1, 4.7.2 Catalog 4580370 .NET Framework 4.8 Catalog 4578970 Windows 10 1607 (Anniversary Update) and Windows Server 2016 .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2 Catalog 4580346 .NET Framework 4.8 Catalog 4578969 Windows 10 1507 .NET Framework 3.5, 4.6, 4.6.1, 4.6.2 Catalog 4580327 The following table is for earlier Windows and Windows Server versions. Product Version Security and Quality Rollup Security Only Update Windows 8.1, Windows RT 8.1 and Windows Server 2012 R2 4579979 4580469 .NET Framework 3.5 Catalog 4578953 Catalog 4578981 .NET Framework 4.5.2 Catalog 4578956 Catalog 4578984 .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 Catalog 4578962 Catalog 4578986 .NET Framework 4.8 Catalog 4578976 Catalog 4578989 Windows Server 2012 4579978 4580468 .NET Framework 3.5 Catalog 4578950 Catalog 4578978 .NET Framework 4.5.2 Catalog 4578954 Catalog 4578982 .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 Catalog 4578961 Catalog 4578985 .NET Framework 4.8 Catalog 4578975 Catalog 4578988 Windows 7 SP1 and Windows Server 2008 R2 SP1 4579977 4580467 .NET Framework 3.5.1 Catalog 4578952 Catalog 4578980 .NET Framework 4.5.2 Catalog 4578955 Catalog 4578983 .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 Catalog 4578963 Catalog 4578987 .NET Framework 4.8 Catalog 4578977 Catalog 4578990 Windows Server 2008 4579980 4580470 .NET Framework 2.0, 3.0 Catalog 4578951 Catalog 4578979 .NET Framework 4.5.2 Catalog 4578955 Catalog 4578983 .NET Framework 4.6 Catalog 4578963 Catalog 4578987   Previous Monthly Rollups The last few .NET Framework Monthly updates are listed below for your convenience: .NET Framework October 1, 2020 Cumulative Update Preview Update for Windows 10, version 2004 and Windows Server, version 2004 .NET Framework September 2020 Cumulative Update Preview Update .NET Framework September 2020 Security and Quality Rollup Updates The post .NET Framework October 2020 Security and Quality Rollup Updates appeared first on .NET Blog.

.NET Core October 2020 Updates – 2.1.23 and 3.1.9 from .NET Blog

Matthew Emerick
13 Oct 2020
2 min read
Today, we are releasing the .NET Core October 2020 Update. These updates contain reliability and other non-security fixes. See the individual release notes for details on updated packages.

Getting the Update

.NET Core 3.1.9 and .NET Core SDK (Download | Release Notes)
.NET Core 2.1.23 and .NET Core SDK (Download | Release Notes)
See the .NET Core release notes for details on the release, including issues fixed and affected packages. The latest .NET Core updates are available on the .NET Core download page.

Lifecycle Updates

Ubuntu 20.10 and Fedora 33 have been added as supported operating systems with this update of .NET Core.

Docker Images

.NET Docker images have been updated for today’s release. The following repos have been updated:
dotnet/core/sdk: .NET Core SDK
dotnet/core/aspnet: ASP.NET Core Runtime
dotnet/core/runtime: .NET Core Runtime
dotnet/core/runtime-deps: .NET Core Runtime Dependencies
dotnet/core/samples: .NET Core Samples
Note: You must pull updated .NET Core container images to get this update, with either docker pull or docker build --pull.

Visual Studio

This update will be included in a future update of Visual Studio. Each version of Visual Studio is only supported with a given version of the .NET Core SDK. Visual Studio version information is included in the .NET Core SDK download pages and release notes. If you are not using Visual Studio, we recommend using the latest SDK release.

The post .NET Core October 2020 Updates – 2.1.23 and 3.1.9 appeared first on .NET Blog.


Announcing Entity Framework Core (EF Core) 5 RC2 from .NET Blog

Matthew Emerick
13 Oct 2020
5 min read
Today, the Entity Framework Core team announces the second release candidate (RC2) of EF Core 5.0. This is a feature complete release candidate of EF Core 5.0 and ships with a "go live" license. You are supported using it in production. This is a great opportunity to start using EF Core 5.0 early while there is still time to fix remaining issues. We're looking for reports of any remaining critical bugs that should be fixed before the final release. Prerequisites EF Core 5.0 will not run on .NET Standard 2.0 platforms, including .NET Framework. The release candidates of EF Core 5.0 require .NET Standard 2.1. This means that EF Core 5.0 will run on .NET Core 3.1 and does not require .NET 5. To summarize: EF Core 5.0 runs on platforms that support .NET Standard 2.1. How to get EF Core 5.0 Release Candidate 2 EF Core is distributed exclusively as a set of NuGet packages. For example, to add the SQL Server provider to your project, you can use the following command using the dotnet tool: dotnet add package Microsoft.EntityFrameworkCore.SqlServer --version 5.0.0-rc.2.20475.6 This following table links to the RC2 versions of the EF Core packages and describes what they are used for. Package Purpose Microsoft.EntityFrameworkCore The main EF Core package that is independent of specific database providers Microsoft.EntityFrameworkCore.SqlServer Database provider for Microsoft SQL Server and SQL Azure Microsoft.EntityFrameworkCore.SqlServer.NetTopologySuite SQL Server support for spatial types Microsoft.EntityFrameworkCore.Sqlite Database provider for SQLite that includes the native binary for the database engine Microsoft.EntityFrameworkCore.Sqlite.NetTopologySuite SQLite support for spatial types Microsoft.EntityFrameworkCore.Cosmos Database provider for Azure Cosmos DB Microsoft.EntityFrameworkCore.InMemory The in-memory database provider Microsoft.EntityFrameworkCore.Tools EF Core PowerShell commands for the Visual Studio Package Manager Console; use this to integrate tools like scaffolding and migrations with Visual Studio Microsoft.EntityFrameworkCore.Design Shared design-time components for EF Core tools Microsoft.EntityFrameworkCore.Proxies Lazy-loading and change-tracking proxies Microsoft.EntityFrameworkCore.Abstractions Decoupled EF Core abstractions; use this for features like extended data annotations defined by EF Core Microsoft.EntityFrameworkCore.Relational Shared EF Core components for relational database providers Microsoft.EntityFrameworkCore.Analyzers C# analyzers for EF Core Installing the EF Core Command Line Interface (CLI) As with EF Core 3.0 and 3.1, the EF Core CLI is no longer included in the .NET Core SDK. Before you can execute EF Core migration or scaffolding commands, you'll have to install this package as either a global or local tool. To install the RC2 tool globally the first time, use: dotnet tool install --global dotnet-ef --version 5.0.0-rc.2.20475.6 If you already have the tool installed, update it with: dotnet tool update --global dotnet-ef --version 5.0.0-rc.2.20475.6 It’s possible to use this new version of the EF Core CLI with projects that use older versions of the EF Core runtime. What's New in EF Core 5 RC2 We maintain documentation covering new features introduced into each release. This release included several bug fixes. Daily builds EF Core previews and release candidates are aligned with the .NET 5 release cycle. These releases tend to lag behind the latest work on EF Core. 
Consider using the daily builds instead to get the most up-to-date EF Core features and bug fixes. As with the previews, the daily builds do not require .NET 5; they can be used with GA/RTM release of .NET Core 3.1. Daily builds are considered stable. Contribute to .NET 5 The .NET documentation team is reorganizing .NET content to better match the workloads you build with .NET. This includes a new .NET Data landing page that will link out to data-related topics ranging from EF Core to APIs, Big Data, and Machine learning. The planning and execution will be done completely in the open on GitHub. This is your opportunity to help shape the hierarchy and content to best fit your needs as a .NET developer. We look forward to your contributions! The EF Core Community Standup The EF Core team is now live streaming every other Wednesday at 10am Pacific Time, 1pm Eastern Time, or 17:00 UTC. Join the stream to ask questions about the EF Core topic of your choice, including the latest release candidate. Visit the .NET Community Standup page to preview upcoming shows and view recordings from past shows. Documentation and Feedback The starting point for all EF Core documentation is docs.microsoft.com/ef/. Please file issues found and any other feedback on the dotnet/efcore GitHub repo. Helpful Short Links The following short links are provided for easy reference and access. Main documentation: https://aka.ms/efdocs Issues and feature requests for EF Core: https://aka.ms/efcorefeedback Entity Framework Roadmap: https://aka.ms/efroadmap What's new in EF Core 5.x? https://aka.ms/efcore5 Thank you from the team A big thank you from the EF team to everyone who has used EF over the years! Arthur Vickers Andriy Svyryd Brice Lambson Jeremy Likness Maurycy Markowski Shay Rojansky Smit Patel   Thank you to our contributors! A huge "thanks" to all the community members who have already contributed code or documentation to the EF Core 5 release! The post Announcing Entity Framework Core (EF Core) 5 RC2 appeared first on .NET Blog.


How to analyze Salesforce Service Cloud data smarter with Tableau Dashboard Starters from What's New

Matthew Emerick
13 Oct 2020
5 min read
Boris Busov, Solution Engineer, and Maddie Rawding, Solution Engineer

The key to building a customer-focused organization is effective customer service. With every touchpoint, there are opportunities to increase operational efficiency and productivity, improve customer satisfaction, and build customer loyalty. High-performing service teams are 1.6 times more likely to use analytics to improve service. However, there are many pain points to getting started: there’s a wealth of data available coming from a variety of tools, traditional governance models prevent users from accessing data, and on top of everything it can be hard to find insights in complex data. The result is that customer service teams lack direction on how to improve and make their customers happy. Every department in an organization should be able to understand their data—and customer service organizations are no exception—which is why we’re excited to add the Service Overview and the Case Tracking dashboards to our collection of starters. These two Dashboard Starters are specifically made for the Salesforce Service Cloud and are a great launching pad for anyone introducing analytics to their service organization. Salesforce puts customer experience at the center of every conversation, and now, you can use the power of Tableau’s new Dashboard Starters to discover insights and make data-driven decisions in your service organization.

Getting started with Service Cloud Dashboard Starters

All of our Dashboard Starters are available on Tableau Online—simply create a new workbook and connect to Dashboard Starters when you’re building a workbook in Tableau Online (to learn how, follow the steps in this Help article). For Service Cloud, select and open the Service Overview and Case Tracking starters. If you don’t have Tableau Online, you can start a free trial. Alternatively, you can download the Dashboard Starters from our website. We have a whole collection of Salesforce Dashboard Starters available for you to try.

Service Overview Dashboard Starter

Use the Service Overview dashboard to get a high-level rundown of your business across important metrics like CSAT, number of cases, response time, and SLA compliance. Select a metric at the top to filter all of the views on the dashboard and then drill into cases by selecting individual marks on a view.
Figure 1: Monitor and drill into key performance metrics with the Service Overview dashboard.
With the Service Overview dashboard you can come to a consensus on what good customer service looks like in your organization. Each metric has a customizable target on the dashboard that can be used to set benchmarks for your organization and alerts can be set on Tableau Online for users to get notified. Filter to see information for different time periods, geographies, and more.
Figure 2: Set target goals to deliver great service.

Case Tracking Dashboard Starter

The Case Tracking dashboard allows agents to monitor their case queue and performance over time. Filter the dashboard to an individual agent and then drill into trends over time to discover potential opportunities for improvement.
Figure 3: Explore performance by agent and monitor trends with the Case Tracking dashboard.
The Case Tracking dashboard also allows you to drill into case details. Add in your Salesforce URL (make sure the parameter is inputted correctly) and return to the dashboard. Use the arrow on the case details worksheet to jump directly into the case in Salesforce.
Figure 4: Drill into case details and then head to Salesforce to take action.

Sharing and customizing the Dashboard Starters

These Service Cloud starters are meant to be a starting point; the possibilities are limitless. For example, you can:

  • Publish your starters, then set alerts and subscriptions to share with your teams.
  • Add data and create visualizations from other important source systems to enrich your analysis.
  • Create new KPIs, build custom calculations, and modify the starters to match how your organization provides service.
  • Use custom colors to match your organization's branding.

Plugging your own data into the Dashboard Starter

These starters use sample data. If you want to add your own data, you will need to connect to your Salesforce instance:

  1. Select the Data Source tab. A dialog box will appear prompting you for your application credentials (your Salesforce username and password).
  2. Enter your credentials and log in to your account. Check with your Salesforce admin to ensure your account has API access to your Salesforce instance.
  3. Go back to the dashboard. Tableau Desktop will then create an extract of your data; how long this takes will vary based on how much data you have in your Salesforce instance.
  4. If any worksheets appear blank, navigate to the blank worksheet and replace the reference fields by right-clicking on the fields marked with red exclamation marks.

Making the most of the new Service Overview and Case Tracking Dashboard Starters lets you organize and analyze the wealth of data gained from every customer interaction. Being able to elevate insights empowers service teams to take action, resulting in lower call volumes, faster resolution times, and improved workflows. From the release of the Tableau Viz Lightning Web Component to the enhancements in Tableau's connector to Salesforce, there has never been a better time to start analyzing your data in Tableau, and these Dashboard Starters are just the beginning of what is to come with Tableau and Salesforce.

Additional resources:

  • Connect to Salesforce data in Tableau
  • Documentation on Dashboard Starters for cloud-based data
  • Overview of Tableau Dashboard Starters
  • Tableau Viz Lightning Web Component
  • What is Salesforce Service Cloud?
  • Tableau resources for customer service teams
Read more
  • 0
  • 0
  • 817
article-image-announcing-net-5-0-rc-2-from-net-blog
Matthew Emerick
13 Oct 2020
12 min read
Save for later

Announcing .NET 5.0 RC 2 from .NET Blog

Matthew Emerick
13 Oct 2020
12 min read
Today, we are shipping .NET 5.0 Release Candidate 2 (RC2). It is a near-final release of .NET 5.0, and the last of two RCs before the official release in November. RC2 is a “go live” release; you are supported using it in production. At this point, we’re looking for reports of any remaining critical bugs that should be fixed before the final release. We also released new versions of ASP.NET Core and EF Core today.

You can download .NET 5.0, for Windows, macOS, and Linux:

  • Installers and binaries
  • Container images
  • Snap installer
  • Release notes
  • Known issues
  • GitHub issue tracker

You need the latest preview version of Visual Studio (including Visual Studio for Mac) to use .NET 5.0.

.NET 5.0 includes many improvements, notably single file applications, smaller container images, more capable JsonSerializer APIs, a complete set of nullable reference type annotations, new target framework names, and support for Windows ARM64. Performance has been greatly improved in the .NET libraries, the GC, and the JIT. ARM64 was a key focus for performance investment, resulting in much better throughput and smaller binaries. .NET 5.0 includes new language versions, C# 9 and F# 5.0. Check out some .NET 5.0 examples so you can try these features out for yourself.

Today is an auspicious day because we’re kicking off the 2020 .NET@Microsoft internal conference. There will be many speakers from the .NET team, but also developers and architects from services teams that rely on .NET to power the Microsoft cloud, sharing their victories and also their challenges. I’m presenting (unsurprisingly) “What’s new in .NET 5.0”. My talk will be easy; I’ll just read the .NET 5.0 blog posts, preview by preview! It will be a great talk. More seriously, the conference is our opportunity to make the case why Microsoft teams should adopt .NET 5.0 soon after it is available. At least one large team I know of is running on RC1 in production. The official .NET Microsoft site has been running on .NET 5.0 since Preview 1. It is now running RC2. The case we’ll make to Microsoft teams this week is very similar to the case that I’ve intended to make to you across all of these .NET 5.0 blog posts: .NET 5.0 is a great release and will improve the fundamentals of your app.

Speaking of conferences, please save the date for .NET Conf 2020. This year, .NET 5.0 will launch at .NET Conf 2020! Come celebrate and learn about the new release. We’re also celebrating our 10th anniversary and we’re working on a few more surprises. You won’t want to miss this one.

Just like I did for .NET 5.0 Preview 8 and .NET 5.0 RC1, I’ve chosen a selection of features to look at in more depth and to give you a sense of how you’ll use them in real-world usage. This post is dedicated to C# 9 pattern matching, Windows ARM64, and ClickOnce.

C# 9 Pattern Matching

Pattern matching is a language feature that was first added in C# 7.0. It’s best to let Mads reintroduce the concept. This is what he had to say when he originally introduced the feature:

C# 7.0 introduces the notion of patterns, which, abstractly speaking, are syntactic elements that can test that a value has a certain “shape”, and extract information from the value when it does.

That’s a really great description, perfectly worded. The C# team has added new patterns in each of the C# 7, C# 8, and C# 9 versions. In this post, you’ll see patterns from each of those language versions, but we’ll focus on the new patterns in C# 9.
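Before getting to the C# 9 additions, it can help to see that definition in its original C# 7 form. The following is a tiny, generic sketch (the value and variable names are invented for illustration, not taken from the post): a type pattern that tests a value's shape and extracts it into a fresh variable in one step.

using System;

object input = 42;

// C# 7's original type pattern: test that 'input' has the shape of an int
// and, if it does, extract the value into the new variable 'count' in one step.
if (input is int count && count > 10)
{
    Console.WriteLine($"Got an int bigger than ten: {count}");
}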
The three new patterns in C# 9 are:

  • Relational patterns, using relational operators such as < and >=.
  • Logical patterns, using the keywords and, or, and not. The poster child example is foo is not null. This type of pattern is most useful when you want to compare multiple things in one pattern.
  • Simple type patterns, using solely a type and no other syntax for matching.

I’m a big fan of the BBC Sherlock series. I’ve written a small app that determines if a given character should have access to a given piece of content within that series. Easy enough. The app is written with two constraints: stay true to the show timeline and characters, and be a great demonstration of patterns. If anything, I suspect I’ve failed most on the second constraint. You’ll find a broader set of patterns and styles than one would expect in a given app (particularly such a small one). When I’m using patterns, I sometimes want to do something subtly different from what a pattern I’m familiar with achieves, and am not sure how to extend that pattern to satisfy my goal. Given this sample, I’m hoping you’ll discover more approaches than perhaps you were aware of before, and can extend your repertoire of familiar patterns.

There are two switch expressions within the app. Let’s start with the smaller of the two.

public static bool IsAccessOKAskMycroft(Person person) => person switch
{
    // Type pattern
    OpenCaseFile f when f.Name == "Jim Moriarty" => true,
    // Simple type pattern
    Mycroft => true,
    _ => false,
};

The first two patterns are type patterns. The first pattern is supported with C# 8. The second one — Mycroft — is an example of the new simple type pattern. With C# 8, this pattern would require an identifier, much like the first pattern, or at the very least a discard such as Mycroft _. In C# 9, the identifier is no longer needed. Yes, Mycroft is a type in the app.

Let’s keep things simple a little longer, before I show you the other switch expression. The following if statement demonstrates a logical pattern, preceded by two instances of a type pattern using is.

if (user is Mycroft m && m.CaresAbout is not object)
{
    Console.WriteLine("Mycroft disappoints us again.");
}

The type isn’t known, so the user variable is tested for the Mycroft type and is then assigned to m if that test passes. A property on the Mycroft object is tested to be not an object. A test for null would have also worked, but wouldn’t have demonstrated a logical pattern.

The other switch expression is a lot more expansive.

public static bool IsAccessOkOfficial(Person user, Content content, int season) => (user, content, season) switch
{
    // Tuple + property patterns
    ({Type: Child}, {Type: ChildsPlay}, _) => true,
    ({Type: Child}, _, _) => false,
    (_ , {Type: Public}, _) => true,
    ({Type: Monarch}, {Type: ForHerEyesOnly}, _) => true,
    // Tuple + type patterns
    (OpenCaseFile f, {Type: ChildsPlay}, 4) when f.Name == "Sherlock Holmes" => true,
    // Property and type patterns
    {Item1: OpenCaseFile {Type: var type}, Item2: {Name: var name}} when type == PoorlyDefined && name.Contains("Sherrinford") && season >= 3 => true,
    // Tuple and type patterns
    (OpenCaseFile, var c, 4) when c.Name.Contains("Sherrinford") => true,
    // Tuple, Type, Property and logical patterns
    (OpenCaseFile {RiskLevel: >50 and <100 }, {Type: StateSecret}, 3) => true,
    _ => false,
};

The only really interesting pattern is the very last one (before the discard: _), which tests for a RiskLevel that is >50 and <100.
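If you'd like to experiment with the three new pattern kinds outside of the Sherlock sample, here is a minimal, self-contained sketch; the CaseFile record and the risk thresholds are invented for this illustration and are not part of the app above.

using System;

var items = new object[]
{
    new CaseFile("Trial of the century", 84),
    new CaseFile("Lost umbrella", 3),
    "just a string",
};

foreach (var item in items)
{
    Console.WriteLine($"{item} -> {Classify(item)}");
}

// Relational, logical, and simple type patterns, all new in C# 9.
static string Classify(object item) => item switch
{
    // Relational patterns (>, <) combined with the logical 'and' pattern
    CaseFile { RiskLevel: >50 and <100 } => "elevated",
    CaseFile { RiskLevel: >=100 } => "critical",
    // Simple type pattern: just the type name, no identifier or discard
    CaseFile => "routine",
    // Logical 'not' pattern
    not null => "not a case file",
    _ => "null",
};

record CaseFile(string Name, int RiskLevel);

Note how the relational and logical patterns nest inside a property pattern, and how the bare CaseFile arm needs no identifier or discard.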
There are many times I’ve wanted to write an if statement with that form of logical pattern syntax without needing to repeat a variable name. This logical pattern could also have been written in the following way instead, which would have more closely matched the syntax demonstrated in the C# 9 blog post. They are equivalent.

(OpenCaseFile {RiskLevel: var riskLevel}, {Type: StateSecret}, 3) when riskLevel switch
{
    >50 and <100 => true,
    _ => false
} => true,

I’m far from a language expert. Jared Parsons and Andy Gocke gave me a lot of help with this section of the post. Thanks! The key stumbling block I had was with a switch on a tuple. At times, the positional pattern is inconvenient, and you only want to address one part of the tuple. That’s where the property pattern comes in, as you can see in the following code.

{Item1: OpenCaseFile {Type: var type}, Item2: {Name: var name}} when type == PoorlyDefined && name.Contains("Sherrinford") && season >= 3 => true,

There is a fair bit going on there. The key point is that the tuple properties are being tested, as opposed to matching the tuple positionally. That approach provides a lot more flexibility. You are free to intermix these approaches within a given switch expression. Hopefully that helps someone. It helped me. If you are curious about what the app does, I’ve saved the output of the program in the app gist. You can also run the app for yourself. I believe it requires .NET 5.0 RC2 to run.

If there has been a pattern with the last three C# (major) versions, it has been patterns. I certainly hope the C# team matches that pattern going forward. I imagine it is the shape of things, and there are certainly more values to extract.

ClickOnce

ClickOnce has been a popular .NET deployment option for many years. It’s now supported for .NET Core 3.1 and .NET 5.0 Windows apps. We knew that many people would want to use ClickOnce for application deployment when we added Windows Forms and WPF support to .NET Core 3.0. In the past year, the .NET and Visual Studio teams worked together to enable ClickOnce publishing, both at the command line and in Visual Studio. We had two goals from the start of the project:

  • Enable a familiar experience for ClickOnce in Visual Studio.
  • Enable modern CI/CD for ClickOnce publishing with command-line flows, with either MSBuild or the Mage tool.

It’s easiest to show you the experience in pictures. Let’s start with the Visual Studio experience, which is centered around project publishing. You need to publish to a Folder target. The primary deployment model we’re currently supporting is framework-dependent apps. It is easy to take a dependency on the .NET Desktop Runtime (that’s the one that contains WPF and Windows Forms). Your ClickOnce installer will install the .NET runtime on user machines if it is needed. We also intend to support self-contained and single file apps.

You might wonder whether you can still take advantage of ClickOnce offline and updating features. Yes, you can. The same install locations and manifest signing features are included. If you have strict signing requirements, you will be covered with this new experience.

Now, let’s switch to the command-line Mage experience. The big change with Mage is that it is now a .NET tool, distributed on NuGet. That means you don’t need to install anything special on your machine. You just need the .NET 5.0 SDK, and then you can install Mage as a .NET tool.
You can use it to publish .NET Framework apps as well; however, SHA1 signing and partial trust support have been removed. The Mage installation command follows:

dotnet tool install -g Microsoft.DotNet.Mage

The following commands configure and publish a sample application. The next command launches the ClickOnce application. And then the familiar ClickOnce installation dialog appears. After installing the application, the app will be launched. After re-building and re-publishing the application, users will see an update dialog. And from there, the updated app will be launched.

Note: The name of the Mage .NET tool will change from mage.net to dotnet-mage for the final release. The NuGet package name will remain the same.

This quick lap around ClickOnce publishing and installation should give you a good idea of how you might use ClickOnce. Our intention has been to enable a parity experience with the existing ClickOnce support for .NET Framework. If you find that we haven’t lived up to that goal, please tell us. ClickOnce browser integration is the same as with .NET Framework, supported in Edge and Internet Explorer. Please tell us how important it is to support the other browsers for your users.

Windows Arm64

MSI installers are now available for Windows Arm64, as you can see in the following image of the .NET 5.0 SDK installer. To further prove the point, I ran the dotnet-runtimeinfo tool on my Arm64 machine to demonstrate the configuration.

C:\Users\rich>dotnet tool install -g dotnet-runtimeinfo
You can invoke the tool using the following command: dotnet-runtimeinfo
Tool 'dotnet-runtimeinfo' (version '1.0.2') was successfully installed.

C:\Users\rich>dotnet-runtimeinfo
**.NET information
Version: 5.0.0
FrameworkDescription: .NET 5.0.0-rc.2.20475.5
Libraries version: 5.0.0-rc.2.20475.5
Libraries hash: c5a3f49c88d3d907a56ec8d18f783426de5144e9

**Environment information
OSDescription: Microsoft Windows 10.0.18362
OSVersion: Microsoft Windows NT 10.0.18362.0
OSArchitecture: Arm64
ProcessorCount: 8

The .NET 5.0 SDK does not currently contain the Windows Desktop components (Windows Forms and WPF) on Windows Arm64. This late change was initially shared in the .NET 5.0 Preview 8 post. We are hoping to add the Windows desktop pack for Windows Arm64 in a 5.0 servicing update. We don’t currently have a date to share. For now, the SDK, console, and ASP.NET Core applications are supported on Windows Arm64.

Closing

We’re now so close to finishing off this release and sending it out for broad production use. We believe it is ready. The production use that it is already getting at Microsoft brings us a lot of confidence. We’re looking forward to you getting the chance to really take advantage of .NET 5.0 in your own environment.

It’s been a long time since we’ve shared our social media pages. If you are on social media, check out the dotnet pages we maintain:

  • Twitter
  • Facebook

The post Announcing .NET 5.0 RC 2 appeared first on .NET Blog.
Read more
  • 0
  • 0
  • 2092

article-image-an-introduction-to-android-gpu-inspector-for-android-game-development-from-android-development-android-authority
Matthew Emerick
13 Oct 2020
4 min read
Save for later

An introduction to Android GPU Inspector for Android game development from Android Development – Android Authority

Matthew Emerick
13 Oct 2020
4 min read
Credit: Adam Sinicki / Android Authority

If you want your game or app to stand out in the Google Play Store, having incredible graphics is one of the surest strategies. In fact, many users download games purely for their graphical fidelity, especially if their handset is new and they want to see what it can do!

See also: The beginner’s guide to Android game development: Everything you need to know

Finding tricks to eke the most performance possible out of a device can therefore be very useful. Fortunately, Google and its partners provide many tools for the job, including Android GPU Inspector.

What is AGI?

Android GPU Inspector (AGI) is a graphics profiling tool that lets developers see precisely what’s going on inside their devices when running applications. More specifically, it exposes a large amount of information regarding GPU performance. Because AGI is now in open beta, developers are free to start playing around with it. As long as they have the right hardware, that is! Android GPU Inspector will currently work only with the Google Pixel 4 (and XL) and requires Android 11 (no emulators either). Of course, the list is limited now during the beta, but eventually all devices should be supported. Check back here for updates, or make a note of the official list of Supported Devices.

See also: How to make a game in Unity: it starts with a simple 3D maze game

Once you learn to read the many counters that Android GPU Inspector provides, you’ll be able to identify and solve performance issues. You can see whether your application is GPU or CPU bound, whether the bottleneck is linked with excessive geometry or overly large textures, and much more. You can then use that information to optimize your apps for greater performance. Both Vulkan and OpenGL ES applications are supported.

How to use Android GPU Inspector

Getting started with Android GPU Inspector is straightforward:

  1. Head over to GPUInspector.dev and download the latest version for your operating system.
  2. You’ll need the Android Debug Bridge (ADB) installed. It acts as the conduit between the Android device and the desktop PC running AGI. ADB comes with the Android SDK, so if you’re a developer you should already be familiar with it. Otherwise, check our guide to the Android SDK for beginners!
  3. Make sure that the application is debuggable (using the debuggable attribute in the Android Manifest).
  4. Connect the device (with ADB debugging enabled) and launch AGI. When AGI boots up, you’ll be prompted to add the ADB path. This should be in your Android SDK folder, under Platform Tools.
  5. Once you’ve done that, click on “Capture a new trace.” You’ll be brought to the capture menu, where you can select your device and the application you want to trace. AGI comes with a minimal Vulkan application that you can use as a test.
  6. Under “Type” choose “System Profile.” Under “Start and Duration” choose “Manual” and “2” respectively. Under “Trace Options” click “Configure” to access a window where you can select all of the profiling data you want to expose. Finally, choose where you want the capture file to be stored under “Output”.
  7. With that done, click “OK” and let AGI do its thing. After a couple of seconds, the process will terminate. Once that’s done, click on “Open Trace” to see all the juicy data for yourself!

If you’re anything like me, you may find all of this information to be a little overwhelming! Fortunately, you can find a full breakdown of what each of the counters means right here.
Read more
  • 0
  • 0
  • 1805