Tech News

3711 Articles

Amazon S3 Update – Three New Security & Access Control Features from AWS News Blog

Matthew Emerick
02 Oct 2020
5 min read
A year or so after we launched Amazon S3, I was in an elevator at a tech conference and heard a couple of developers use “just throw it into S3” as the answer to their data storage challenge. I remember that moment well because the comment was made so casually, and it was one of the first times that I fully grasped just how quickly S3 had caught on.

Since that launch, we have added hundreds of features and multiple storage classes to S3, while also reducing the cost to store a gigabyte of data for a month by almost 85% (from $0.15 to $0.023 for S3 Standard, and as low as $0.00099 for S3 Glacier Deep Archive). Today, our customers use S3 to support many different use cases including data lakes, backup and restore, disaster recovery, archiving, and cloud-native applications.

Security & Access Control

As the set of use cases for S3 has expanded, our customers have asked us for new ways to regulate access to their mission-critical buckets and objects. We added IAM policies many years ago, and Block Public Access in 2018. Last year we added S3 Access Points (Easily Manage Shared Data Sets with Amazon S3 Access Points) to help you manage access in large-scale environments that might encompass hundreds of applications and petabytes of storage.

Today we are launching S3 Object Ownership as a follow-on to two other S3 security & access control features that we launched earlier this month. All three features are designed to give you even more control and flexibility:

- Object Ownership – You can now ensure that newly created objects within a bucket have the same owner as the bucket.
- Bucket Owner Condition – You can now confirm the ownership of a bucket when you create a new object or perform other S3 operations.
- Copy API via Access Points – You can now access S3’s Copy API through an Access Point.

You can use all of these new features in all AWS regions at no additional charge. Let’s take a look at each one!

Object Ownership

With the proper permissions in place, S3 already allows multiple AWS accounts to upload objects to the same bucket, with each account retaining ownership and control over the objects. This many-to-one upload model can be handy when using a bucket as a data lake or another type of data repository. Internal teams or external partners can all contribute to the creation of large-scale centralized resources. With this model, the bucket owner does not have full control over the objects in the bucket and cannot use bucket policies to share objects, which can lead to confusion.

You can now use a new per-bucket setting to enforce uniform object ownership within a bucket. This will simplify many applications, and will obviate the need for the Lambda-powered self-COPY that has become a popular way to do this up until now. Because this setting changes the behavior seen by the account that is uploading, the PUT request must include the bucket-owner-full-control ACL. You can also choose to use a bucket policy that requires the inclusion of this ACL.

To get started, open the S3 Console, locate the bucket and view its Permissions, click Object Ownership, and Edit. Then select Bucket owner preferred and click Save. As I mentioned earlier, you can use a bucket policy to enforce object ownership (read About Object Ownership and this Knowledge Center Article to learn more).
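From code, the new behavior shows up on ordinary PUT requests. Below is a minimal boto3 sketch (not from the original post) of an upload that includes the required bucket-owner-full-control ACL and also passes the ExpectedBucketOwner parameter described under Bucket Owner Condition below; the bucket name, key, and account ID are illustrative placeholders:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

# Placeholders: substitute your own bucket, key, and 12-digit account ID.
BUCKET = 'my-shared-data-lake'
KEY = 'partner-uploads/report.csv'
BUCKET_OWNER_ACCOUNT_ID = '111122223333'

try:
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=b'col1,col2\n1,2\n',
        # Required when the bucket prefers uniform object ownership:
        # grants the bucket owner full control over the new object.
        ACL='bucket-owner-full-control',
        # Bucket Owner Condition: the request fails with a 403 status
        # code if the bucket is not owned by the account you expect.
        ExpectedBucketOwner=BUCKET_OWNER_ACCOUNT_ID)
    print('Object uploaded with bucket-owner-full-control')
except ClientError as err:
    # An ownership mismatch surfaces as an AccessDenied (HTTP 403) error.
    print('Upload refused:', err.response['Error']['Code'])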
Many AWS services deliver data to the bucket of your choice, and are now equipped to take advantage of this feature. S3 Server Access Logging, S3 Inventory, S3 Storage Class Analysis, AWS CloudTrail, and AWS Config now deliver data that you own. You can also configure Amazon EMR to use this feature by setting fs.s3.canned.acl to BucketOwnerFullControl in the cluster configuration (learn more).

Keep in mind that this feature does not change the ownership of existing objects. Also, note that you will now own more S3 objects than before, which may cause changes to the numbers you see in your reports and other metrics. AWS CloudFormation support for Object Ownership is under development and is expected to be ready before AWS re:Invent.

Bucket Owner Condition

This feature lets you confirm that you are writing to a bucket that you own. You simply pass a numeric AWS Account ID to any of the S3 Bucket or Object APIs using the expectedBucketOwner parameter or the x-amz-expected-bucket-owner HTTP header. The ID indicates the AWS Account that you believe owns the subject bucket. If there’s a match, then the request will proceed as normal. If not, it will fail with a 403 status code. To learn more, read Bucket Owner Condition.

Copy API via Access Points

S3 Access Points give you fine-grained control over access to your shared data sets. Instead of managing a single and possibly complex policy on a bucket, you can create an access point for each application, and then use an IAM policy to regulate the S3 operations that are made via the access point (read Easily Manage Shared Data Sets with Amazon S3 Access Points to see how they work). You can now use S3 Access Points in conjunction with the S3 CopyObject API by using the ARN of the access point instead of the bucket name (read Using Access Points to learn more).

Use Them Today

As I mentioned earlier, you can use all of these new features in all AWS regions at no additional charge.

— Jeff;

Python 3.5 is no longer supported from Python Insider

Matthew Emerick
02 Oct 2020
1 min read
Python 3.5 is no longer supported. There will be no more bug fixes or security patches for the 3.5 series, and Python 3.5.10 is the last release. The Python core development community recommends that all remaining Python 3.5 users upgrade to the latest version.
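The announcement itself contains no code, but if you still maintain scripts that might run under 3.5, a small interpreter guard makes the end of support visible at runtime. This is purely an illustrative sketch, not part of the announcement:

import sys
import warnings

# Python 3.5 reached end-of-life with 3.5.10: warn anyone still running it.
if sys.version_info < (3, 6):
    warnings.warn(
        "Python %d.%d no longer receives bug fixes or security patches; "
        "please upgrade to a supported release." % sys.version_info[:2],
        DeprecationWarning)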

Join Hacktoberfest at the Xamarin Community Toolkit from Xamarin Blog

Matthew Emerick
02 Oct 2020
3 min read
You may have heard about the Xamarin Community Toolkit by now. This toolkit will be an important part of the Xamarin.Forms ecosystem during the evolution to .NET 6. Now is the time to contribute, since it is “Hacktoberfest” after all!

What is the Xamarin Community Toolkit?

Since Xamarin.Forms 5 will be the last major version of Forms before .NET 6, we wanted to have an intermediate library that can still add value for Forms in the meanwhile. However, why stop there? There are also a lot of converters, behaviors, effects, etc. that everyone is continually rewriting. To help avoid this, we consolidated all of those into the Xamarin Community Toolkit. This toolkit already has a lot of traction and support from our wonderful community, but there is always room for more! Lucky for us, that time of year which makes contributing extra special is upon us again.

Hacktoberfest 2020

For Hacktoberfest we welcome you all to join us and plant that tree (a new reward from Hacktoberfest!) or earn that t-shirt while giving some of your valuable time to our library. On top of that, we will offer some swag whenever you decide to contribute. When you do, and if your PRs are eligible, we will reach out to you near the end of October to make sure we get your details. That means: no need to do anything special, just crush that code!

How to Get Involved?

1. Head over to the Xamarin Community Toolkit repository.
2. Find an issue you want to work on and comment that you will be taking responsibility for that issue. It might be that your issue is not on there yet; please feel free to add it.
3. Please await confirmation of your issue, which typically happens within 24 hours.
4. Socialize your new issue on Twitter with the hashtag #XamarinCommunityToolkit

A couple of things to note:

- We appreciate any and all contributions. However, fixing typos, “README”s, or similar documents does NOT count towards a rewardable contribution.
- All pull-requests should be opened between October 2nd – November 1st, 2020.

Have questions? Just ask! Feel free to contact us through the Discord server. You can also reach out directly on Twitter: @jfversluis. Additionally, you can open an issue.

Quality Pull-Requests

Anything that substantially improves the quality of the product. It should be more than fixing typos.

Approved Items of Work

Any open “bug” issue that has been verified, or an enhancement spec that has some indication it is approved. If you have any questions, please contact us. Since the Toolkit was launched recently, we apologize in advance if some of the issues are mislabeled. If you are unsure about anything, just comment and a member of our team will reach out with guidance.

Get Started

Thank you so much for your interest, we look forward to all of your great contributions this year!

The post Join Hacktoberfest at the Xamarin Community Toolkit appeared first on Xamarin Blog.

September 2020 rewind from Linux.com

Matthew Emerick
02 Oct 2020
1 min read
Click to Read More at Enable Sysadmin. The post September 2020 rewind appeared first on Linux.com.

Akraino: An Open Source Project for the Edge from Linux.com

Matthew Emerick
01 Oct 2020
5 min read
Akraino is an open-source project designed for the Edge community to easily integrate open source components into their stack. It’s a set of open infrastructures and application blueprints spanning a broad variety of use cases, including 5G, AI, Edge IaaS/PaaS, and IoT, for both provider and enterprise Edge domains. We sat down with Tina Tsou, TSC Co-Chair of the Akraino project, to learn more about it and its community. Here is a lightly edited transcript of the interview:

Swapnil Bhartiya: Today, we have with us Tina Tsou, TSC Co-Chair of the Akraino project. Tell us a bit about the Akraino project.

Tina Tsou: Yeah, I think Akraino is an Edge Stack project under Linux Foundation Edge. Before Akraino, the developers had to go to the upstream community to download the upstream software components and integrate in-store to test. With the blueprint ideas and concept, the developers can directly do the use-case base to blueprint, do all the integration, and [have it] ready for the end-to-end deployment for Edge.

Swapnil Bhartiya: The blueprints are the critical piece of it. What are these blueprints and how do they integrate with the whole framework?

Tina Tsou: Based on the certain use case, we do the community CI/CD (continuous integration and continuous deployment). We also have proven security requirements. We do the community lab and we also do the life cycle management. And then we do the production quality, which is deployment-ready.

Swapnil Bhartiya: Can you explain what the Edge computing framework looks like?

Tina Tsou: We have four segments: Cloud, Telco, IoT, and Enterprise. When we do the framework, it’s like we have a framework of the Edge compute in general, but for each segment, they are slightly different. You will see in the lower level, you have the network, you have the gateway, you have the switches. In the upper part of it, you have all kinds of FPGAs and then the data plane. Then, you have the controllers and orchestration, like the Kubernetes stuff and all kinds of applications running on bare metal, virtual machines or the containers. By the way, we also have the orchestration on the site.

Swapnil Bhartiya: And how many blueprints are there? Can you talk about it more specifically?

Tina Tsou: I think we have around 20-ish blueprints, but they are converged into blueprint families. We have a blueprint family for telco appliances, including Radio Edge Cloud, and SEBA that has enabled broadband access. We also have a blueprint for Network Cloud. We have a blueprint for Integrated Edge Cloud. We have a blueprint for Edge Lite IoT. So, in this case, the different blueprints in the same blueprint family can share the same software framework, which saves a lot of time. That means we can deploy it at a large scale.

Swapnil Bhartiya: The software components, which you already talked about in each blueprint, are they all in the Edge project or are there some components from external projects as well?

Tina Tsou: We have the philosophy of upstream first. If we can find it from the upstream community, we just directly take it from the upstream community and install and integrate it. If we find something that we need, we go to the upstream community to see whether it can be changed or updated there.

Swapnil Bhartiya: How challenging or easy is it to integrate these components together, to build the stack?

Tina Tsou: It depends on which group and family we are talking about. I think most of them are at a middle level: not too easy, not too complex. But the reference has to create the installation, like the YAML files configuration and builds of ISO images; some parts may be more complex and some parts will be easy to download and integrate.

Swapnil Bhartiya: We have talked about the project. I want to talk about the community. So first of all, tell us, what is the role of the TSC?

Tina Tsou: We have a whole bunch of documentation on how the TSC runs if you want to read it. I think the role of the TSC is more tactical steering. We have a chair and co-chair, and there are like 6-7 subcommittees for specific topics like security, technical community, CI, and documentation process.

Swapnil Bhartiya: What kind of community is there around the Akraino project?

Tina Tsou: I think we have a pretty diverse community. We have the end-users like the telcos and the hyperscalers, the internet companies, and also enterprise companies. Then we have the OEM/ODM vendors, the chip makers or the SoC makers. Then we have the IP companies and even some universities.

Swapnil Bhartiya: Tina, thank you so much for taking the time today to explain the Akraino project and also about the blueprints, the community, and the roadmap for the project. I look forward to seeing you again to get more updates about the project.

Tina Tsou: Thank you for your time. I appreciate it.

The post Akraino: An Open Source Project for the Edge appeared first on Linux.com.

.NET Framework October 1, 2020 Cumulative Update Preview Update for Windows 10, version 2004 and Windows Server, version 2004 from .NET Blog

Matthew Emerick
01 Oct 2020
3 min read
Today, we are releasing the September 2020 Cumulative Update Preview Updates for .NET Framework.

Quality and Reliability

This release contains the following quality and reliability improvements.

ASP.NET

- Disabled reuse of AppPathModifier in ASP.NET control output.
- HttpCookie objects in the ASP.NET request context will be created with configured defaults for cookie flags instead of .NET-style primitive defaults, to match the behavior of `new HttpCookie(name)`.

CLR [1]

- Added a CLR config variable Thread_AssignCpuGroups (1 by default) that can be set to 0 to disable automatic CPU group assignment done by the CLR for new threads created by Thread.Start() and thread pool threads, so that an app may do its own thread-spreading.
- Addressed a rare data corruption that can occur when using new APIs such as Unsafe.ByteOffset, which are often used with the new Span types. The corruption could occur when a GC operation is performed while a thread is calling Unsafe.ByteOffset from inside of a loop.
- Addressed an issue regarding timers with very long due times ticking down much sooner than expected when the AppContext switch “Switch.System.Threading.UseNetCoreTimer” is enabled.

SQL

- Addressed a failure that sometimes occurred when a user connected to one Azure SQL database, performed an enclave-based operation, and then connected to another database under the same server that has the same Attestation URL and performed an enclave operation on the second server.

WCF [2]

- Addressed an issue with WCF services sometimes failing to start when starting multiple services concurrently.

Windows Forms

- Addressed a regression introduced in .NET Framework 4.8, where Control.AccessibleName, Control.AccessibleRole, and Control.AccessibleDescription properties stopped working for the following controls: Label, GroupBox, ToolStrip, ToolStripItems, StatusStrip, StatusStripItems, PropertyGrid, ProgressBar, ComboBox, MenuStrip, MenuItems, DataGridView.
- Addressed a regression in the accessible name for combo box items for data-bound combo boxes. .NET Framework 4.8 RTM started using the type name instead of the value of the DisplayMember property as an accessible name; this fix uses the DisplayMember again.

[1] Common Language Runtime (CLR)
[2] Windows Communication Foundation (WCF)

Getting the Update

The Cumulative Update Preview is available via Windows Update and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework 4.8 updates are available via Windows Update and Microsoft Update Catalog. Updates for other versions of .NET Framework are part of the Windows 10 Monthly Cumulative Update.

Note: Customers that rely on Windows Update will automatically receive the .NET Framework version-specific updates. Advanced system administrators can also make use of the direct Microsoft Update Catalog download links to .NET Framework-specific updates. Before applying these updates, please ensure that you carefully review the .NET Framework version applicability, to ensure that you only install updates on systems where they apply.

The following table is for Windows 10 and Windows Server 2016+ versions.

Product Version: Windows 10 2004 and Windows Server, version 2004
Cumulative Update for .NET Framework 3.5, 4.8: Catalog 4576945

Previous Cumulative Updates

The last few .NET Framework updates are listed below for your convenience:

- .NET Framework September 2020 Security and Quality Rollup Updates
- .NET Framework September 3, 2020 Cumulative Update Preview for Windows 10 2004 and Windows Server, version 2004
- .NET Framework August 2020 Cumulative Update Preview
- .NET Framework August 2020 Security and Quality Rollup Updates

The post .NET Framework October 1, 2020 Cumulative Update Preview Update for Windows 10, version 2004 and Windows Server, version 2004 appeared first on .NET Blog.

.NET Conf 2020 and Community Events this October from Xamarin Blog

Anonymous
01 Oct 2020
4 min read
Virtually tune in to communities around the world through amazing online events, streams, and recordings this October. Stay connected to your developer communities through the upcoming .NET Conf 2020, virtual Meetups, community stand-ups, podcasts, and more, as well as discovering ways to get started with .NET tutorials and hosting your own virtual experiences.

.NET Conf 2020

The .NET team and the .NET Foundation are excited to present .NET Conf 2020 – a free, 3-day, livestream event all about .NET development! This year is going to be extra special as .NET 5.0 launches on the 10-year anniversary of this virtual conference, happening November 10-12th, 2020!

Host a Virtual Event

The .NET Foundation is supporting virtual events run by communities from around the world. Organize a virtual event or Meetup for your community between November 13, 2020 – January 31, 2021 to get the following support:

- Promotion of your event on the .NET Conf event site and through the .NET Virtual User Group.
- Technical content to present/promote to your local groups through an “Event in a Box”.
- Support with online streaming to the .NET Foundation’s YouTube channel, if needed.
- Digital SWAG and additional offers from our partners.

How to Participate

- Fill out this form to let us know about your virtual event.
- Share your stories and events via the hashtag #dotnetconf on Twitter.
- Share our Facebook event with your friends to help spread the word!
- For questions, please email dotnetconf@dotnetfoundation.org

Important note: Please be advised that speaker requests and streaming support for virtual .NET Conf 2020 events are provided based on speaker and streaming support team availability, as well as on a “first-come-first-served” basis.

Join us on November 10th at 08:00 PST for the live keynote on www.dotnetconf.net. Share your stories, watch the stream, and have fun learning!

Virtual User Group Events Around the World

Discover events happening all over the globe and all throughout the month via the .NET Foundation Community Meetup site. Get assistance with supporting your own local user group by joining the .NET Foundation Meetup Pro account. Additionally, the .NET Foundation is helping .NET user groups go virtual through the .NET Virtual User Group program!

.NET Foundation’s .NET Virtual User Group Program

Let the .NET Foundation take care of all of the streaming so you can focus on enabling developers around the world to join into YOUR event. Submit your user group session, get scheduled, and get your event promoted! It is a great way to engage with the broader .NET community while keeping your user group active and maintaining social distance. Join the .NET Virtual User Groups right now! Find awesome upcoming user group sessions from around the world and right in your hometown.

Watch the .NET Team Stream

Discover new live community stand-ups every week! Catch the Xamarin community live-streaming on Twitch daily. Get the full list of .NET streamers throughout the week. Be sure to click that “Follow” button to stay up to date.

Join a .NET Stand-up

Multiple times a week, the .NET teams meet online to talk about the latest news across all .NET products. Find live, upcoming, and past episodes on the new .NET Community Stand-up website, including the upcoming Xamarin Community Stand-up on 10/1/2020 at 1:00 PM EST. Topic: Community member Theodora shows off her College Diary app.

Subscribe to the Xamarin Podcast

Join your co-hosts, Matt Soucoup and James Montemagno, to catch up on the latest and greatest in Android, Xamarin, and cloud development. Get all the updates about Xamarin.Forms, recent features, and future releases in a jiffy. Subscribe to the Xamarin Podcast on iTunes, Spotify, Google Play Music, Stitcher, or your favorite podcast app.

Get Started with Xamarin and .NET

Begin your journey of .NET Core with experts like Scott Hanselman, Kendra Havens, and many others that help walk you through the beginning stages of learning .NET, such as how to get started with .NET, Xamarin, ASP.NET, NuGet, and even build your first app! Discover the exciting world of development with this Beginner series for .NET developers today.

The post .NET Conf 2020 and Community Events this October appeared first on Xamarin Blog.

Store and Access Time Series Data at Any Scale with Amazon Timestream – Now Generally Available from AWS News Blog

Matthew Emerick
01 Oct 2020
10 min read
Time series are a very common data format that describes how things change over time. Some of the most common sources are industrial machines and IoT devices, IT infrastructure stacks (such as hardware, software, and networking components), and applications that share their results over time. Managing time series data efficiently is not easy because the data model doesn’t fit general-purpose databases.

For this reason, I am happy to share that Amazon Timestream is now generally available. Timestream is a fast, scalable, and serverless time series database service that makes it easy to collect, store, and process trillions of time series events per day up to 1,000 times faster and at as little as 1/10th the cost of a relational database.

This is made possible by the way Timestream is managing data: recent data is kept in memory and historical data is moved to cost-optimized storage based on a retention policy you define. All data is always automatically replicated across multiple availability zones (AZ) in the same AWS region. New data is written to the memory store, where data is replicated across three AZs before returning success of the operation. Data replication is quorum based such that the loss of nodes, or an entire AZ, does not disrupt durability or availability. In addition, data in the memory store is continuously backed up to Amazon Simple Storage Service (S3) as an extra precaution.

Queries automatically access and combine recent and historical data across tiers without the need to specify the storage location, and support time series-specific functionalities to help you identify trends and patterns in data in near real time. There are no upfront costs; you pay only for the data you write, store, or query. Based on the load, Timestream automatically scales up or down to adjust capacity, without the need to manage the underlying infrastructure.

Timestream integrates with popular services for data collection, visualization, and machine learning, making it easy to use with existing and new applications. For example, you can ingest data directly from AWS IoT Core, Amazon Kinesis Data Analytics for Apache Flink, and Amazon MSK. You can visualize data stored in Timestream from Amazon QuickSight, and use Amazon SageMaker to apply machine learning algorithms to time series data, for example for anomaly detection. You can use Timestream fine-grained AWS Identity and Access Management (IAM) permissions to easily ingest or query data from an AWS Lambda function. We are providing the tools to use Timestream with open source platforms such as Apache Kafka, Telegraf, Prometheus, and Grafana.

Using Amazon Timestream from the Console

In the Timestream console, I select Create database. I can choose to create a Standard database or a Sample database populated with sample data. I proceed with a standard database and I name it MyDatabase. All Timestream data is encrypted by default. I use the default master key, but you can use a customer managed key that you created using AWS Key Management Service (KMS). In that way, you can control the rotation of the master key, and who has permissions to use or manage it. I complete the creation of the database.

Now my database is empty. I select Create table and name it MyTable. Each table has its own data retention policy. First data is ingested in the memory store, where it can be stored from a minimum of one hour to a maximum of a year. After that, it is automatically moved to the magnetic store, where it can be kept from a minimum of one day to a maximum of 200 years, after which it is deleted. In my case, I select 1 hour of memory store retention and 5 years of magnetic store retention.

When writing data in Timestream, you cannot insert data that is older than the retention period of the memory store. For example, in my case I will not be able to insert records older than 1 hour. Similarly, you cannot insert data with a future timestamp. I complete the creation of the table. As you noticed, I was not asked for a data schema. Timestream will automatically infer that as data is ingested.
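The same setup can be scripted. Here is a minimal boto3 sketch (not from the original post) that mirrors the retention choices above, 1 hour in the memory store and 5 years in the magnetic store; the names match the walkthrough, and error handling and encryption options are left out:

import boto3

# The timestream-write client manages databases, tables, and ingestion.
ts_write = boto3.client('timestream-write')

ts_write.create_database(DatabaseName='MyDatabase')

# Retention mirrors the console walkthrough: data spends 1 hour in the
# memory store, then 5 years (expressed in days) in the magnetic store.
ts_write.create_table(
    DatabaseName='MyDatabase',
    TableName='MyTable',
    RetentionProperties={
        'MemoryStoreRetentionPeriodInHours': 1,
        'MagneticStoreRetentionPeriodInDays': 5 * 365,
    })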
Now, let’s put some data in the table!

Loading Data in Amazon Timestream

Each record in a Timestream table is a single data point in the time series and contains:

- The measure name, type, and value. Each record can contain a single measure, but different measure names and types can be stored in the same table.
- The timestamp of when the measure was collected, with nanosecond granularity.
- Zero or more dimensions that describe the measure and can be used to filter or aggregate data. Records in a table can have different dimensions.

For example, let’s build a simple monitoring application collecting CPU, memory, swap, and disk usage from a server. Each server is identified by a hostname and has a location expressed as a country and a city. In this case, the dimensions would be the same for all records: country, city, hostname.

Records in the table are going to measure different things. The measure names I use are cpu_utilization, memory_utilization, swap_utilization, and disk_utilization. Measure type is DOUBLE for all of them.

For the monitoring application, I am using Python. To collect monitoring information I use the psutil module that I can install with:

pip3 install psutil

Here’s the code for the collect.py application:

import time
import boto3
import psutil
from botocore.config import Config

DATABASE_NAME = "MyDatabase"
TABLE_NAME = "MyTable"

COUNTRY = "UK"
CITY = "London"
HOSTNAME = "MyHostname"  # You can make it dynamic using socket.gethostname()

INTERVAL = 1  # Seconds


def prepare_record(measure_name, measure_value):
    record = {
        'Time': str(current_time),
        'Dimensions': dimensions,
        'MeasureName': measure_name,
        'MeasureValue': str(measure_value),
        'MeasureValueType': 'DOUBLE'
    }
    return record


def write_records(records):
    try:
        result = write_client.write_records(DatabaseName=DATABASE_NAME,
                                            TableName=TABLE_NAME,
                                            Records=records,
                                            CommonAttributes={})
        status = result['ResponseMetadata']['HTTPStatusCode']
        print("Processed %d records. WriteRecords Status: %s" %
              (len(records), status))
    except Exception as err:
        print("Error:", err)


if __name__ == '__main__':

    session = boto3.Session()
    write_client = session.client('timestream-write', config=Config(
        read_timeout=20, max_pool_connections=5000,
        retries={'max_attempts': 10}))
    query_client = session.client('timestream-query')

    dimensions = [
        {'Name': 'country', 'Value': COUNTRY},
        {'Name': 'city', 'Value': CITY},
        {'Name': 'hostname', 'Value': HOSTNAME},
    ]

    records = []

    while True:

        current_time = int(time.time() * 1000)

        cpu_utilization = psutil.cpu_percent()
        memory_utilization = psutil.virtual_memory().percent
        swap_utilization = psutil.swap_memory().percent
        disk_utilization = psutil.disk_usage('/').percent

        records.append(prepare_record('cpu_utilization', cpu_utilization))
        records.append(prepare_record(
            'memory_utilization', memory_utilization))
        records.append(prepare_record('swap_utilization', swap_utilization))
        records.append(prepare_record('disk_utilization', disk_utilization))

        print("records {} - cpu {} - memory {} - swap {} - disk {}".format(
            len(records), cpu_utilization, memory_utilization,
            swap_utilization, disk_utilization))

        if len(records) == 100:
            write_records(records)
            records = []

        time.sleep(INTERVAL)

I start the collect.py application. Every 100 records, data is written in the MyTable table:

$ python3 collect.py
records 4 - cpu 31.6 - memory 65.3 - swap 73.8 - disk 5.7
records 8 - cpu 18.3 - memory 64.9 - swap 73.8 - disk 5.7
records 12 - cpu 15.1 - memory 64.8 - swap 73.8 - disk 5.7
...
records 96 - cpu 44.1 - memory 64.2 - swap 73.8 - disk 5.7
records 100 - cpu 46.8 - memory 64.1 - swap 73.8 - disk 5.7
Processed 100 records. WriteRecords Status: 200
records 4 - cpu 36.3 - memory 64.1 - swap 73.8 - disk 5.7
records 8 - cpu 31.7 - memory 64.1 - swap 73.8 - disk 5.7
records 12 - cpu 38.8 - memory 64.1 - swap 73.8 - disk 5.7
...

Now, in the Timestream console, I see the schema of the MyTable table, automatically updated based on the data ingested. Note that, since all measures in the table are of type DOUBLE, the measure_value::double column contains the value for all of them. If the measures were of different types (for example, INT or BIGINT) I would have more columns (such as measure_value::int and measure_value::bigint). In the console, I can also see a recap of which kinds of measures I have in the table, their corresponding data type, and the dimensions used for each specific measure.

Querying Data from the Console

I can query time series data using SQL. The memory store is optimized for fast point-in-time queries, while the magnetic store is optimized for fast analytical queries. However, queries automatically process data on all stores (memory and magnetic) without having to specify the data location in the query. I am running queries straight from the console, but I can also use JDBC connectivity to access the query engine. I start with a basic query to see the most recent records in the table:

SELECT * FROM MyDatabase.MyTable ORDER BY time DESC LIMIT 8
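Note that collect.py also creates a query_client that the listing never uses. As a sketch of how the same query could be run outside the console (pagination and type handling are simplified here, and this is not part of the original post), the timestream-query client accepts the same SQL:

import boto3

# The timestream-query client runs the same SQL you would type in the console.
query_client = boto3.client('timestream-query')

QUERY = "SELECT * FROM MyDatabase.MyTable ORDER BY time DESC LIMIT 8"

paginator = query_client.get_paginator('query')
for page in paginator.paginate(QueryString=QUERY):
    columns = [col['Name'] for col in page['ColumnInfo']]
    for row in page['Rows']:
        # Scalar values arrive as strings; NULL values have no 'ScalarValue'.
        values = [datum.get('ScalarValue') for datum in row['Data']]
        print(dict(zip(columns, values)))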
Let’s try something a little more complex. I want to see the average CPU utilization aggregated by hostname in 5-minute intervals for the last two hours. I filter records based on the content of measure_name. I use the function bin() to round time to a multiple of an interval size, and the function ago() to compare timestamps:

SELECT hostname,
       bin(time, 5m) as binned_time,
       avg(measure_value::double) as avg_cpu_utilization
FROM MyDatabase.MyTable
WHERE measure_name = 'cpu_utilization'
  AND time > ago(2h)
GROUP BY hostname, bin(time, 5m)

When collecting time series data you may miss some values. This is quite common, especially for distributed architectures and IoT devices. Timestream has some interesting functions that you can use to fill in the missing values, for example using linear interpolation, or based on the last observation carried forward. More generally, Timestream offers many functions that help you to use mathematical expressions, manipulate strings, arrays, and date/time values, use regular expressions, and work with aggregations/windows.

To experience what you can do with Timestream, you can create a sample database and add the two IoT and DevOps datasets that we provide. Then, in the console query interface, look at the sample queries to get a glimpse of some of the more advanced functionalities.

Using Amazon Timestream with Grafana

One of the most interesting aspects of Timestream is the integration with many platforms. For example, you can visualize your time series data and create alerts using Grafana 7.1 or higher. The Timestream plugin is part of the open source edition of Grafana.

I add a new GrafanaDemo table to my database, and use another sample application to continuously ingest data. The application simulates performance data collected from a microservice architecture running on thousands of hosts. I install Grafana on an Amazon Elastic Compute Cloud (EC2) instance and add the Timestream plugin using the Grafana CLI:

$ grafana-cli plugins install grafana-timestream-datasource

I use SSH Port Forwarding to access the Grafana console from my laptop:

$ ssh -L 3000:<EC2-Public-DNS>:3000 -N -f ec2-user@<EC2-Public-DNS>

In the Grafana console, I configure the plugin with the right AWS credentials, and the Timestream database and table. Now, I can select the sample dashboard, distributed as part of the Timestream plugin, using data from the GrafanaDemo table where performance data is continuously collected.

Available Now

Amazon Timestream is available today in US East (N. Virginia), Europe (Ireland), US West (Oregon), and US East (Ohio). You can use Timestream with the console, the AWS Command Line Interface (CLI), AWS SDKs, and AWS CloudFormation. With Timestream, you pay based on the number of writes, the data scanned by the queries, and the storage used. For more information, please see the pricing page. You can find more sample applications in this repo. To learn more, please see the documentation.

It’s never been easier to work with time series, including data ingestion, retention, access, and storage tiering. Let me know what you are going to build!

— Danilo

.NET Interactive Preview 3: VS Code Insiders and .NET Polyglot Notebooks from .NET Blog

Matthew Emerick
30 Sep 2020
6 min read
In .NET Interactive Preview 2, we announced that in addition to Jupyter Notebook and Jupyter Lab, users could use nteract as well. In this preview, users can add VS Code Insiders to that list. With the VS Code Insiders experience, users can get started with .NET notebooks without needing to install Jupyter. The VS Code experience is still a work in progress, and is only available in VS Code Insiders. We look forward to your feedback.

Getting started

To get started with .NET notebooks, please install the following:

- The latest version of VS Code Insiders.
- The latest .NET Core 3.1 SDK.
- The .NET Interactive Notebooks extension.

Creating a new .NET notebook

Once you have the requirements listed above installed, you are ready to start creating .NET notebooks in VS Code Insiders. To create a new notebook, open the Command Palette (Ctrl+Shift+P), and select Create new blank notebook. You can also create a new notebook with the Ctrl+Shift+Alt+N key combination.

Every notebook has a default language. A new blank notebook starts with a C# cell, as noted in the lower right corner of the cell. If you click on C# (.NET Interactive), you can change the language of the cell. If you change the language of the cell, the next cell you create will continue with that language. To add a cell, hover above or below an existing cell. Buttons appear allowing you to specify the type of cell to add, +Code or +Markdown. If you select +Code, you can change the language afterward.

Opening an existing .NET notebook

To open an existing .NET notebook, bring up the Command Palette and select Open notebook. Now, navigate to a local .ipynb file. With .NET notebooks in VS Code, you can take advantage of rich coding experiences like IntelliSense, and you can use all of your favorite VS Code extensions.

Polyglot Notebooks: Variable Sharing

.NET Interactive is a multi-language kernel that allows you to create notebooks that use different languages together. You switch languages from one cell to another, as appropriate to the task at hand. Pulling values into the notebook and moving values between languages are useful capabilities, which we’ve enabled with a pair of magic commands: #!share and #!value.

#!share

.NET Interactive provides subkernels for three languages (C#, F#, and PowerShell) within the same process. You can share variables between the .NET subkernels using the #!share magic command. Once a variable has been declared in one of these subkernels, it can be accessed from another. And because these kernels run in the same process, if the type of a variable is a reference type, changes to its state can be observed immediately from other kernels.

Example: In this GIF, I’ve declared a C# variable csharpVariable in one cell, which I then share with F# using #!share --from csharp csharpVariable.

#!value

Importing text into a notebook, whether from the clipboard or a JSON or CSV file or a URL, is a fairly common scenario. The #!value magic command makes it easier to get text into your notebook without having to explicitly declare a string variable and worry about correctly escaping it. When you execute a cell using #!value, the content is stored in memory. (It will also be stored in your .ipynb output, and displayed, if you use the --mime-type switch.)

So how do you access the value once it’s stored? The #!value magic command actually refers to another subkernel. In the GIF above, you can see it in the IntelliSense list that’s shown when #!share is typed. Once a value has been stored using #!value, you can share it with another subkernel just like you can from C#, F#, or PowerShell. There are a few ways to use #!value to store data in a notebook session. The example below shows you how to do it from the clipboard. For examples of other ways to use it, including reading from files and URLs, please check out Direct data entry with #!value.

Sharing kernel values with JavaScript

.NET Interactive has APIs available that simplify the process of directly writing HTML and JavaScript in the same notebook where you write .NET code. This enables you to create custom visualizations and access the broader ecosystem of JavaScript libraries without needing .NET wrappers. In the example below, we are sharing code from the .NET kernel using JavaScript and using it to render HTML, all in a single notebook.

First, I build a collection of items in C# representing fruits with prices and quantities. Next, since .NET Interactive is polyglot, I can switch to JavaScript. (While the VS Code experience has a language chooser, you can also switch languages using magic commands like #!javascript, so that you can use these features in Jupyter as well.) In the JavaScript cell, I load the D3.js visualization library and when it’s loaded, I access the fruits variable from the C# kernel using interactive.csharp.getVariable("fruits"). This interactive object has properties corresponding to each of the .NET Interactive subkernels. The variable from the subkernel is serialized into JSON and then deserialized into a JavaScript object (basket) that I’ll use to render my bar chart with D3.js.

Final step: Let’s render the results. We are now going to use HTML to render our chart and JavaScript to call our function. The HTML has to be rendered first because that’s where the JavaScript will build the visualization. But we don’t have to put them in separate cells. Using magic commands, we can switch languages within the same cell so that the output renders at the bottom of the notebook:

#!html
<svg id="fruit_display" width="100%"></svg>

#!js
renderfruits("svg#fruit_display");

And there you have it! A simple demonstration of how you can leverage .NET Interactive polyglot notebooks. To learn more about variable sharing, sub-kernels, and HTML and JavaScript in .NET Interactive, please check out the linked documentation.

Documentation

We are also happy to share some progress on .NET Interactive documentation. You can now learn more about .NET Interactive’s architecture, variable sharing, visualization, and other features. The documentation is still a work in progress, so we look forward to hearing from you.

Resources

- Documentation
- Samples
- Source code
- .NET Interactive Notebooks Extension

Happy interactive programming!

The post .NET Interactive Preview 3: VS Code Insiders and .NET Polyglot Notebooks appeared first on .NET Blog.

New App Store marketing tools available from News - Apple Developer

Matthew Emerick
29 Sep 2020
1 min read
Take advantage of new marketing resources to promote your apps around the world. You can now generate short links or embeddable code that lead to your App Store product page and display your app icon, a QR code, or an App Store badge. Download localized App Store badges, your app icon, and more. Learn more

React Newsletter #232 from ui.dev's RSS Feed

Matthew Emerick
29 Sep 2020
3 min read
News

Airbnb releases visx 1.0

This is a collection of reusable, low-level visualization components that combine the powers of D3 and React. It has been used internally at Airbnb for the last 2.5 years and was publicly released last week. We wrote a lot about it in yesterday’s issue of Bytes, if you’d like a longer, more colorful breakdown.

Dan Abramov tweets about the future of import React

tl;dr, eventually (like, a long eventually), stop using import React from 'react' and start using import * as React from 'react'.

Articles

Introducing the new JSX Transform

This article from the official React Blog describes how React 17 provides support for a new version of JSX Transform. It goes over exactly what that new version is and how to try it out.

The Complete Guide to Next.js Authentication

In this comprehensive guide, Nader Dabit will teach you how to implement authentication in a Next.js app. It covers client authentication, authenticated server-rendered pages, authenticated API routes, protected routes, and redirects.

Understanding React rendering

Rendering is the most important procedure that a programmer has to manage in frontend development. In React, the render() method is the only required method in a class component and is responsible for describing the view to be rendered to the browser window. This article will help you understand the subtleties of how this method works.

Tutorials

Building a Material UI Dashboard with React

In this tutorial, you’ll learn how to build a full-stack dashboard with KPIs, charts, and a data table. It takes you from data in the database to the interactive, filterable, and searchable admin dashboard.

Building Multistep Forms with MaterialUI and React Hooks

This tutorial walks you through how to use the React useState hook and the MaterialUI component library to build a multistep medical history collection form.

Sponsor

React developers are in demand on Vettery

Vettery is an online hiring marketplace that’s changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today. Get started today.

Projects

Vime

An open-source media player that’s customizable, extensible, and framework-agnostic. It also comes with bindings for React and other frameworks.

headlessui-react

A set of completely unstyled, fully accessible UI components for React, designed to integrate beautifully with Tailwind CSS. Created by Tailwind Labs.

reactivue

Use the Vue Composition API in React components.

Videos

I built a chat app in 7 minutes with React & Firebase

This video from Fireship demonstrates how quickly React and Firebase can help you to build simple apps like this chat app. Jeff is also maybe showing off just a little bit here, but we’re ok with that too.

Join us! Power BI Webinar. Wednesday 30 September 2020 – 8:00 AM – 9:00 AM PDT from Microsoft Power BI Blog | Microsoft Power BI

Matthew Emerick
28 Sep 2020
1 min read
This webinar shows shortcuts to help you unlock your new superpower in your usual context and save a lot of time when working with Power BI together with Excel.

React in the streets, D3 in the sheets from ui.dev's RSS Feed

Matthew Emerick
28 Sep 2020
6 min read
Got a real spicy one for you today. Airbnb releases visx, Elder.js is a new Svelte framework, and CGId buttholes.

Airbnb releases visx 1.0

Visualizing a future where we can stay in Airbnbs again. Tony Robbins taught us to visualize success…

Last week, Airbnb officially released visx 1.0, a collection of reusable, low-level visualization components that combine the powers of React and D3 (the JavaScript library, not, sadly, the Mighty Ducks). Airbnb has been using visx internally for over two years to “unify our visualization stack across the company.” By “visualization stack”, they definitely mean “all of our random charts and graphs”, but that shouldn’t stop you from listing yourself as a full-stack visualization engineer once you’ve played around with visx a few times.

Why tho?

The main sales pitch for visx is that it’s low-level enough to be powerful but high-level enough to, well, be useful. The way it does this is by leveraging React for the UI layer and D3 for the under-the-hood mathy stuff. Said differently - React in the streets, D3 in the sheets.

The library itself features 30 separate packages of React visualization primitives and offers 3 main advantages compared to other visualization libraries:

- Smaller bundles: because visx is split into multiple packages.
- BYOL: Like your boyfriend around dinner time, visx is intentionally unopinionated. Use any animation library, state management solution, or CSS-in-JS tools you want - visx DGAF.
- Not a charting library: visx wants to teach you how to fish, not catch a fish for you. It’s designed to be built on top of.

The bottom line

Are data visualizations the hottest thing to work on? Not really. But are they important? Also not really. But the marketing and biz dev people at your company love them. And those nerds esteemed colleagues will probably force you to help them create some visualizations in the future (if they haven’t already). visx seems like a great way to help you pump those out faster, easier, and with greater design consistency. Plus, there’s always a chance that the product you’re working on could require some sort of visualizations (i.e. tooltips, gradients, patterns, etc.). visx can help with that too. So, thanks Airbnb. No word yet on if Airbnb is planning on charging their usual 3% service fee on top of an overinflated cleaning fee for usage of visx, but we’ll keep you updated.

Elder.js 1.0 - a new Svelte framework

Whoever Svelte it, dealt it, dear. Respect your generators…

Elder.js 1.0 is a new, opinionated Svelte framework and static site generator that is very SEO-focused. It was created by Nick Reese in order to try and solve some of the unique challenges that come with building flagship SEO sites with 10-100k+ pages, like his site elderguide.com.

Quick Svelte review: Svelte is a JavaScript library way of life that was first released ~4 years ago. It compiles all your code to vanilla JS at build time with less dependencies and no virtual DOM, and its syntax is known for being pretty simple and readable.

So, what’s unique about Elder.js?

- Partial hydration: It hydrates just the parts of the client that need to be interactive, allowing you to significantly reduce your payloads and still maintain full control over component lazy-loading, preloading, and eager-loading.
- Hooks, shortcodes, and plugins: You can customize the framework with hooks that are designed to be “modular, sharable, and easily bundled in to Elder.js plugins for common use cases.”
- Straightforward data flow: Associating a data function in your route.js gives you complete control over how you fetch, prepare, and manipulate data before sending it to your Svelte template.

These features (plus its tiny bundle sizes) should make Elder.js a faster and simpler alternative to Sapper (the default Svelte framework) for a lot of use cases. Sapper is still probably the way to go if you’re building a full-fledged Svelte app, but Elder.js seems pretty awesome for content-heavy Svelte sites.

The bottom line

We’re super interested in who will lead Elder.js’s $10 million seed round. That’s how this works, right?

JS Quiz - Answer Below

Why does this code work?

const friends = ['Alex', 'AB', 'Mikenzi']
friends.hasOwnProperty('push') // false

Specifically, why does friends.hasOwnProperty('push') work even though friends doesn’t have a hasOwnProperty property and neither does Array.prototype?

Cool bits

- Vime is an open-source media player that’s customizable, extensible, and framework-agnostic.
- The React Core Team wrote about the new JSX transform in React 17.
- Speaking of React 17, Happy 3rd Birthday React 16 😅.
- Nathan wrote an in-depth post about how to understand React rendering.
- Urlcat is a cool JavaScript library that helps you build URLs faster and avoid common mistakes. Is it cooler than the live-action CATS movie tho? Only one of those has CGId buttholes, so you tell us. (My search history is getting real weird writing this newsletter.)
- Everyone’s favorite Googler Addy Osmani wrote about visualizing data structures using the VSCode Debug Visualizer.
- Smolpxl is a JavaScript library for creating retro, pixelated games. There’s some strong Miniclip-in-2006 vibes in this one.
- Lea Verou wrote a great article about the failed promise of Web Components. “The failed promise” sounds like a mix between a T-Swift song and one of my middle school journal entries, but sometimes web development requires strong language, ok?
- Billboard.js released v2.1 because I guess this is now Chart Library Week 2020.

JS Quiz - Answer

const friends = ['Alex', 'AB', 'Mikenzi']
friends.hasOwnProperty('push') // false

As mentioned earlier, if you look at Array.prototype, it doesn’t have a hasOwnProperty method. How then, does the friends array have access to hasOwnProperty? The reason is because the Array class extends the Object class. So when the JavaScript interpreter sees that friends doesn’t have a hasOwnProperty property, it checks if Array.prototype does. When Array.prototype doesn’t, it checks if Object.prototype does, it does, then it invokes it.

const friends = ['Alex', 'AB', 'Mikenzi']

console.log(Object.prototype)
/*
  constructor: ƒ Object()
  hasOwnProperty: ƒ hasOwnProperty()
  isPrototypeOf: ƒ isPrototypeOf()
  propertyIsEnumerable: ƒ propertyIsEnumerable()
  toLocaleString: ƒ toLocaleString()
  toString: ƒ toString()
  valueOf: ƒ valueOf()
*/

friends instanceof Array // true
friends instanceof Object // true

friends.hasOwnProperty('push') // false

Data &amp; insights for tracking the world’s most watched election from What's New

Anonymous
24 Sep 2020
4 min read
Data is how we understand elections. In the leadup to major political events, we all become data people: we track the latest candidate polling numbers, evaluate policy proposals, and look for statistics that could explain how the electorate might swing this year. This has never been more true than during this year’s U.S. presidential election. Quality data and clear, compelling visualizations are critical to understanding the polls, and how the sentiment among voters could influence the outcome of the race.

Today, Tableau is announcing a new partnership with SurveyMonkey and Axios to bring exclusive public opinion research to life through rich visual analytics. Powered by SurveyMonkey’s vast polling infrastructure, Tableau’s world-class data visualization tools, and Axios’ incisive storytelling, this resource will enable anyone to delve into of-the-moment data and make discoveries. In the leadup to the election, SurveyMonkey will poll a randomly selected subset of the more than 2 million people who take a survey on their platform every day, asking a wide range of questions, from election integrity to COVID-19 concerns to how people will vote.

“It’s never been clearer that what people think matters,” says Jon Cohen, chief research officer at SurveyMonkey. “With people around the world tuned into the U.S. presidential election, we’re showcasing how American voters are processing their choices, dealing with ongoing devastation from the pandemic, managing environmental crises, and confronting fresh challenges in their daily lives.”

The results from SurveyMonkey’s ongoing polls will be published in interactive Tableau dashboards, where anyone will be able to filter and drill down into the data to explore how everything from demographics to geography to political affiliation plays into people’s opinions.

“To understand public views, we need to go beyond the topline numbers that dominate the conversation,” Cohen adds. “The critical debate over race and racial disparities and deeply partisan reaction to the country’s coronavirus response both point to the need to understand how different groups perceive what’s happening and what to do about it.”

As a platform purpose-built for helping people to peel back layers of complex datasets and gain insights, Tableau provides visitors a compelling avenue into better understanding this year’s pre-election landscape. “People need reliable, well designed data visualizations that are easy to understand and can provide key insights,” says Andy Cotgreave, Tableau’s Director of Technical Evangelism. “As Americans make their decisions ahead of the election, they need charts that are optimized for communication. Tableau makes it possible to quickly and easily build the right chart for any data, and enable people to understand the data for themselves.”

Alongside the dashboards, Tableau and SurveyMonkey experts will contribute pointers on visualization best practices for elections, and resources to enable anyone to better work with survey data. And Axios, as the exclusive media partner for this project, will incorporate the data and visualizations into their ongoing analysis of the political landscape around the election.

At Tableau, we believe that data is a critical tool for understanding complex issues like political elections. We also believe that where data comes from—and how it's understood in context—is essential. Through our partnership with SurveyMonkey and Axios, we aim to provide visitors with an end-to-end experience of polling data, from understanding how SurveyMonkey’s polling infrastructure produces robust datasets, to seeing them visualized in Tableau, to watching them inform political commentary through Axios. Data doesn’t just answer questions—it prompts exploration and discussion. It helps us understand the complex issues shaping our jobs, families, and communities. Helping people see and understand survey data can bring clarity to important issues leading up to the election and lets people dig deeper to answer their own specific questions.

See all the Power BI updates at the Microsoft Business Applications Launch Event from Microsoft Power BI Blog | Microsoft Power BI

Matthew Emerick
24 Sep 2020
1 min read
We’re excited to share all the new innovations we’re rolling out for Power BI to help make creating professional-grade apps even easier. Join us on October 1, 2020, from 9–11 AM Pacific Time (UTC -7), for this free digital event.