
Tech News - Uncategorized

28 Articles

Now available in Tableau 2020.4: Prep Builder in the browser, multiple map layers, Resource Monitoring Tool for Linux, and more from What's New

Anonymous
17 Nov 2020
6 min read
Emily Chen, Product Marketing Specialist, Tableau — November 17, 2020

The newest release of Tableau is here! Tableau 2020.4 brings practical enhancements to make analytics in your organization more seamless and scalable. Upgrade to take advantage of these new innovations and check out our playlist of Tableau Conference-ish highlights to hear more about what we’ve got coming next year.

Let’s recap some of the exciting features in the Tableau 2020.4 release:

• Prep your data all in one integrated platform on the web with Tableau Prep Builder in the browser.
• Enjoy multiple enhancements to bring your geospatial analysis to the next level, including multiple marks layers support for maps, Redshift spatial support, and more.
• Explore next-level analysis with two new models for predictive modeling functions.
• Proactively monitor and troubleshoot server health with the Resource Monitoring Tool, now available for Linux deployments.

Now we’ll take a deeper look at some of the biggest new features in this release.

Prep your data wherever you have access to a browser

During Tableau Conference-ish we were thrilled to announce that the full functionality of Tableau Prep Builder was coming to the browser—and now it’s here in Tableau 2020.4! With more people needing access to data than ever before, Tableau Prep empowers everyone in an organization to easily prepare their data, now all in one convenient and integrated platform on the web.

For analysts, you can now create and edit data prep flows entirely within web authoring—all you need is access to a browser. Say goodbye to context switching between the desktop to create your flows, and then Server or Online to publish and share them.

For IT folks, this means that Prep Builder can now be centrally managed on your server, simplifying deployment, license management, and version control. Without the need to manage individual desktops, IT admins can now upgrade their server to get everyone in the organization on the latest version in one go. And since data prep flows are stored on the server, IT teams get more visibility into what is being created, and can better manage data sources and standardize on repetitive flows. Scaling up Tableau Prep in your organization as your needs grow is now easier than ever.

IT admins can enable web authoring for Prep in the server or site settings. Once enabled, users will be able to create a new flow from the Start page, Explore page, or Data Source page by clicking the “New” button and selecting “Flow” from the dropdown. We can’t wait for you all to start preparing your data in a way that fits your workflow.

Unlock next-level web analytics with significant web authoring enhancements

To continue our journey to the web, we are also happy to announce multiple enhancements to our web authoring experience, many of which we know will help unlock analytics in your business. Starting in 2020.4, extract creation, highlight actions, filters, fixed sets, and a Salesforce connector are all available in web authoring. Stay tuned for more details in a future blog post.

Less is more? Not this time—level up your geospatial analysis with multiple marks layers support for maps, and more

You asked, and asked some more, and today we are excited to release a fan-favorite feature that we announced at TC-ish—multiple marks layers support for maps is here! You can now add unlimited marks layers from a single data source to your map visualizations.
We know that two sets of marks via a dual axis wasn’t always enough to bring together and analyze all of your location data. With this feature, bringing multiple spatial layers and contexts together for analysis in Tableau is not just possible, but simple—and with unlimited layers, the sky’s the limit.

Adding multiple layers of marks is easy. Once you’ve created a map, simply drag a geographic field onto the “Add a Marks Layer” drop target that appears in the upper left corner of the map canvas—and that’s it! In the example below, you can see that we can now visualize DC Police Sectors, Police Department buffers, and 311 calls, all in one layered map view.

But that’s not all. This release also brings a collection of significant spatial improvements. We’re expanding Tableau’s spatial database connections to make solving location-based questions easier than ever. You can now connect directly to tables in Redshift that contain spatial data, and instantly visualize that data in Tableau. We’re also introducing offline maps support for Tableau Server, ensuring that maps remain accessible to all users—especially helpful in organizations with strict internet access requirements. And finally, spatial support for Tableau Prep has arrived! Prep Builder can now import, recognize, and export spatial data to extracts and published data sources.

Power up your predictions with new predictive modeling enhancements

We introduced predictive modeling functions in 2020.3, and we’re continuing to build on this functionality to ensure that you have the power, simplicity, and flexibility you need to apply these functions to a wide variety of use cases. With Tableau 2020.4, you can now select from two additional models in predictive modeling functions—regularized linear regression and Gaussian process regression—in addition to the default model of linear regression. You’ll also be able to extend your date range, and therefore your predictions, with just a few clicks using a simple menu. In the example below, we want to see what kind of sales numbers we can expect in the following months. Setting this up is as simple as clicking the Date pill, selecting “Show Future Values,” and using the menu options to set how far into the future you want to generate predictions.

Resource Monitoring Tool for Tableau Server arrives on Linux

Lastly, we are happy to announce that the Tableau Resource Monitoring Tool, previously available for Windows only, is now available on Linux deployments as part of Tableau Server Management. Proactively monitor and troubleshoot server health with improved visibility into your hardware and software performance to get the most out of your deployment.

Thank you, Tableau Community! You are at the heart of everything we do and the Tableau 2020.4 release is no different. We can’t do this without you, so thank you for your continued feedback and inspiration. Check out the Ideas forum to see all of the features that have been incorporated as a result of your brilliant ideas, and get the newest version of Tableau today.


Has the Time Come for IT Predictive Analytics? from DevOps.com

Matthew Emerick
15 Oct 2020
1 min read
Predictive analytics technologies have become critical to compete in manufacturing (predicting machine failure), banking (predicting fraud), e-commerce (predicting buying behavior) as well as to address horizontal use cases such as cybersecurity breach prevention and sales forecasting. Using data to predict and prevent IT outages and issues is also a growing best practice—especially as advances in […]


Store and Access Time Series Data at Any Scale with Amazon Timestream – Now Generally Available from AWS News Blog

Matthew Emerick
01 Oct 2020
10 min read
Time series are a very common data format that describes how things change over time. Some of the most common sources are industrial machines and IoT devices, IT infrastructure stacks (such as hardware, software, and networking components), and applications that share their results over time. Managing time series data efficiently is not easy because the data model doesn’t fit general-purpose databases.

For this reason, I am happy to share that Amazon Timestream is now generally available. Timestream is a fast, scalable, and serverless time series database service that makes it easy to collect, store, and process trillions of time series events per day, up to 1,000 times faster and at as little as 1/10th the cost of a relational database.

This is made possible by the way Timestream manages data: recent data is kept in memory and historical data is moved to cost-optimized storage based on a retention policy you define. All data is always automatically replicated across multiple availability zones (AZ) in the same AWS region. New data is written to the memory store, where data is replicated across three AZs before returning success of the operation. Data replication is quorum based such that the loss of nodes, or an entire AZ, does not disrupt durability or availability. In addition, data in the memory store is continuously backed up to Amazon Simple Storage Service (S3) as an extra precaution.

Queries automatically access and combine recent and historical data across tiers without the need to specify the storage location, and support time series-specific functionalities to help you identify trends and patterns in data in near real time. There are no upfront costs; you pay only for the data you write, store, or query. Based on the load, Timestream automatically scales up or down to adjust capacity, without the need to manage the underlying infrastructure.

Timestream integrates with popular services for data collection, visualization, and machine learning, making it easy to use with existing and new applications. For example, you can ingest data directly from AWS IoT Core, Amazon Kinesis Data Analytics for Apache Flink, and Amazon MSK. You can visualize data stored in Timestream from Amazon QuickSight, and use Amazon SageMaker to apply machine learning algorithms to time series data, for example for anomaly detection. You can use Timestream’s fine-grained AWS Identity and Access Management (IAM) permissions to easily ingest or query data from an AWS Lambda function. We are providing the tools to use Timestream with open source platforms such as Apache Kafka, Telegraf, Prometheus, and Grafana.

Using Amazon Timestream from the Console

In the Timestream console, I select Create database. I can choose to create a Standard database or a Sample database populated with sample data. I proceed with a standard database and I name it MyDatabase. All Timestream data is encrypted by default. I use the default master key, but you can use a customer managed key that you created using AWS Key Management Service (KMS). In that way, you can control the rotation of the master key, and who has permissions to use or manage it. I complete the creation of the database. Now my database is empty.

I select Create table and name it MyTable. Each table has its own data retention policy. First data is ingested in the memory store, where it can be stored from a minimum of one hour to a maximum of a year.
After that, it is automatically moved to the magnetic store, where it can be kept from a minimum of one day to a maximum of 200 years, after which it is deleted. In my case, I select 1 hour of memory store retention and 5 years of magnetic store retention. When writing data in Timestream, you cannot insert data that is older than the retention period of the memory store. For example, in my case I will not be able to insert records older than 1 hour. Similarly, you cannot insert data with a future timestamp. I complete the creation of the table. As you noticed, I was not asked for a data schema. Timestream will automatically infer that as data is ingested. Now, let’s put some data in the table!

Loading Data in Amazon Timestream

Each record in a Timestream table is a single data point in the time series and contains:

• The measure name, type, and value. Each record can contain a single measure, but different measure names and types can be stored in the same table.
• The timestamp of when the measure was collected, with nanosecond granularity.
• Zero or more dimensions that describe the measure and can be used to filter or aggregate data. Records in a table can have different dimensions.

For example, let’s build a simple monitoring application collecting CPU, memory, swap, and disk usage from a server. Each server is identified by a hostname and has a location expressed as a country and a city. In this case, the dimensions would be the same for all records: country, city, hostname. Records in the table are going to measure different things. The measure names I use are: cpu_utilization, memory_utilization, swap_utilization, disk_utilization. Measure type is DOUBLE for all of them.

For the monitoring application, I am using Python. To collect monitoring information I use the psutil module that I can install with:

pip3 install psutil

Here’s the code for the collect.py application:

import time
import boto3
import psutil

from botocore.config import Config

DATABASE_NAME = "MyDatabase"
TABLE_NAME = "MyTable"

COUNTRY = "UK"
CITY = "London"
HOSTNAME = "MyHostname"  # You can make it dynamic using socket.gethostname()

INTERVAL = 1  # Seconds


def prepare_record(measure_name, measure_value):
    record = {
        'Time': str(current_time),
        'Dimensions': dimensions,
        'MeasureName': measure_name,
        'MeasureValue': str(measure_value),
        'MeasureValueType': 'DOUBLE'
    }
    return record


def write_records(records):
    try:
        result = write_client.write_records(DatabaseName=DATABASE_NAME,
                                            TableName=TABLE_NAME,
                                            Records=records,
                                            CommonAttributes={})
        status = result['ResponseMetadata']['HTTPStatusCode']
        print("Processed %d records. WriteRecords Status: %s" %
              (len(records), status))
    except Exception as err:
        print("Error:", err)


if __name__ == '__main__':

    session = boto3.Session()
    write_client = session.client('timestream-write', config=Config(
        read_timeout=20, max_pool_connections=5000, retries={'max_attempts': 10}))
    query_client = session.client('timestream-query')

    dimensions = [
        {'Name': 'country', 'Value': COUNTRY},
        {'Name': 'city', 'Value': CITY},
        {'Name': 'hostname', 'Value': HOSTNAME},
    ]

    records = []

    while True:

        current_time = int(time.time() * 1000)

        cpu_utilization = psutil.cpu_percent()
        memory_utilization = psutil.virtual_memory().percent
        swap_utilization = psutil.swap_memory().percent
        disk_utilization = psutil.disk_usage('/').percent

        records.append(prepare_record('cpu_utilization', cpu_utilization))
        records.append(prepare_record(
            'memory_utilization', memory_utilization))
        records.append(prepare_record('swap_utilization', swap_utilization))
        records.append(prepare_record('disk_utilization', disk_utilization))

        print("records {} - cpu {} - memory {} - swap {} - disk {}".format(
            len(records), cpu_utilization, memory_utilization,
            swap_utilization, disk_utilization))

        if len(records) == 100:
            write_records(records)
            records = []

        time.sleep(INTERVAL)

I start the collect.py application. Every 100 records, data is written in the MyTable table:

$ python3 collect.py
records 4 - cpu 31.6 - memory 65.3 - swap 73.8 - disk 5.7
records 8 - cpu 18.3 - memory 64.9 - swap 73.8 - disk 5.7
records 12 - cpu 15.1 - memory 64.8 - swap 73.8 - disk 5.7
. . .
records 96 - cpu 44.1 - memory 64.2 - swap 73.8 - disk 5.7
records 100 - cpu 46.8 - memory 64.1 - swap 73.8 - disk 5.7
Processed 100 records. WriteRecords Status: 200
records 4 - cpu 36.3 - memory 64.1 - swap 73.8 - disk 5.7
records 8 - cpu 31.7 - memory 64.1 - swap 73.8 - disk 5.7
records 12 - cpu 38.8 - memory 64.1 - swap 73.8 - disk 5.7
. . .

Now, in the Timestream console, I see the schema of the MyTable table, automatically updated based on the data ingested. Note that, since all measures in the table are of type DOUBLE, the measure_value::double column contains the value for all of them. If the measures were of different types (for example, INT or BIGINT) I would have more columns (such as measure_value::int and measure_value::bigint). In the console, I can also see a recap of which kinds of measures I have in the table, their corresponding data type, and the dimensions used for that specific measure.

Querying Data from the Console

I can query time series data using SQL. The memory store is optimized for fast point-in-time queries, while the magnetic store is optimized for fast analytical queries. However, queries automatically process data on all stores (memory and magnetic) without having to specify the data location in the query. I am running queries straight from the console, but I can also use JDBC connectivity to access the query engine.

I start with a basic query to see the most recent records in the table:

SELECT * FROM MyDatabase.MyTable ORDER BY time DESC LIMIT 8

Let’s try something a little more complex. I want to see the average CPU utilization aggregated by hostname in 5-minute intervals for the last two hours. I filter records based on the content of measure_name.
I use the function bin() to round time to a multiple of an interval size, and the function ago() to compare timestamps:

SELECT hostname, bin(time, 5m) as binned_time, avg(measure_value::double) as avg_cpu_utilization
FROM MyDatabase.MyTable
WHERE measure_name = 'cpu_utilization'
  AND time > ago(2h)
GROUP BY hostname, bin(time, 5m)

When collecting time series data you may miss some values. This is quite common, especially for distributed architectures and IoT devices. Timestream has some interesting functions that you can use to fill in the missing values, for example using linear interpolation, or based on the last observation carried forward. More generally, Timestream offers many functions that help you to use mathematical expressions, manipulate strings, arrays, and date/time values, use regular expressions, and work with aggregations/windows.

To experience what you can do with Timestream, you can create a sample database and add the two IoT and DevOps datasets that we provide. Then, in the console query interface, look at the sample queries to get a glimpse of some of the more advanced functionalities.

Using Amazon Timestream with Grafana

One of the most interesting aspects of Timestream is the integration with many platforms. For example, you can visualize your time series data and create alerts using Grafana 7.1 or higher. The Timestream plugin is part of the open source edition of Grafana. I add a new GrafanaDemo table to my database, and use another sample application to continuously ingest data. The application simulates performance data collected from a microservice architecture running on thousands of hosts.

I install Grafana on an Amazon Elastic Compute Cloud (EC2) instance and add the Timestream plugin using the Grafana CLI:

$ grafana-cli plugins install grafana-timestream-datasource

I use SSH port forwarding to access the Grafana console from my laptop:

$ ssh -L 3000:<EC2-Public-DNS>:3000 -N -f ec2-user@<EC2-Public-DNS>

In the Grafana console, I configure the plugin with the right AWS credentials, and the Timestream database and table. Now, I can select the sample dashboard, distributed as part of the Timestream plugin, using data from the GrafanaDemo table where performance data is continuously collected.

Available Now

Amazon Timestream is available today in US East (N. Virginia), Europe (Ireland), US West (Oregon), and US East (Ohio). You can use Timestream with the console, the AWS Command Line Interface (CLI), AWS SDKs, and AWS CloudFormation. With Timestream, you pay based on the number of writes, the data scanned by the queries, and the storage used. For more information, please see the pricing page.

You can find more sample applications in this repo. To learn more, please see the documentation. It’s never been easier to work with time series, including data ingestion, retention, access, and storage tiering. Let me know what you are going to build!

— Danilo
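The walkthrough above runs its queries from the console, but the same timestream-query client created in collect.py can issue them programmatically. The following is a minimal, illustrative sketch (not part of the original post) that runs the CPU-utilization aggregation with boto3 and prints the rows; it assumes the MyDatabase and MyTable names used above and that AWS credentials are configured in the environment.

import boto3

# Assumes the database/table names from the walkthrough above and
# AWS credentials already configured for boto3.
QUERY = """
SELECT hostname, bin(time, 5m) AS binned_time,
       avg(measure_value::double) AS avg_cpu_utilization
FROM MyDatabase.MyTable
WHERE measure_name = 'cpu_utilization' AND time > ago(2h)
GROUP BY hostname, bin(time, 5m)
"""

query_client = boto3.client('timestream-query')

# Timestream may return results in pages; follow NextToken until exhausted.
next_token = None
while True:
    kwargs = {'QueryString': QUERY}
    if next_token:
        kwargs['NextToken'] = next_token
    response = query_client.query(**kwargs)
    columns = [c['Name'] for c in response['ColumnInfo']]
    for row in response['Rows']:
        values = [d.get('ScalarValue') for d in row['Data']]
        print(dict(zip(columns, values)))
    next_token = response.get('NextToken')
    if not next_token:
        break

Each printed dictionary maps the column names (hostname, binned_time, avg_cpu_utilization) to their string values, which you can then cast or feed into further analysis.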


JDK 15: The new features in Java 15 from InfoWorld Java

Matthew Emerick
15 Sep 2020
1 min read
Java Development Kit 15, Oracle’s implementation of the next version of Java SE (Standard Edition), becomes available as a production release today, September 15, 2020. Highlights of JDK 15 include text blocks, hidden classes, a foreign-memory access API, the Z Garbage Collector, and previews of sealed classes, pattern matching, and records. JDK 15 is just a short-term release, only to be supported with Oracle Premier Support for six months until JDK 16 arrives next March. JDK 17, the next Long-Term Support release, to be supported by Oracle for eight years, is slated to arrive one year from now, as per Oracle’s six-month release cadence for Java SE versions.


Python 3.5.10 is now available from Python Insider

Matthew Emerick
05 Sep 2020
1 min read
 Python 3.5.10 is now available.  You can get it here.


Kali Linux 2020.2 Release from Kali Linux

Matthew Emerick
12 May 2020
9 min read
Despite the turmoil in the world, we are thrilled to be bringing you an awesome update with Kali Linux 2020.2! And it is available for immediate download.

A quick overview of what’s new since January:

• KDE Plasma Makeover & Login
• PowerShell by Default. Kind of.
• Kali on ARM Improvements
• Lessons From The Installer Changes
• New Key Packages & Icons
• Behind the Scenes, Infrastructure Improvements

KDE Plasma Makeover & Login

With XFCE and GNOME having had a Kali Linux look and feel update, it’s time to go back to our roots (days of backtrack-linux) and give some love and attention to KDE Plasma. Introducing our dark and light themes for KDE Plasma. On the subject of theming, we have also tweaked the login screen (lightdm). It looks different, both graphically and in layout (the login boxes are aligned now)!

PowerShell by Default. Kind of.

A while ago, we put PowerShell into Kali Linux’s network repository. This meant that if you wanted PowerShell, you had to install the package as a one-off by doing:

kali@kali:~$ sudo apt install -y powershell

We have now put PowerShell into one of our (primary) metapackages, kali-linux-large. This means, if you choose to install this metapackage during system setup, or once Kali is up and running (sudo apt install -y kali-linux-large), if PowerShell is compatible with your architecture, you can just jump straight into it (pwsh)! PowerShell isn’t in the default metapackage (that’s kali-linux-default), but it is in the one that includes the default and many extras, and can be included during system setup.

Kali on ARM Improvements

With Kali Linux 2020.1, desktop images no longer used “root/toor” as the default credentials to log in, but had moved to “kali/kali”. Our ARM images are now the same. We are no longer using the superuser account to log in with. We also warned back in 2019.4 that we would be moving away from an 8GB minimum SD card, and we are finally ready to pull the trigger on this. The requirement is now 16GB or larger. One last note on the subject of ARM devices: we are not installing locales-all any more, so we highly recommend that you set your locale. This can be done by running the following command, sudo dpkg-reconfigure locales, then logging out and back in.

Lessons From The Installer Changes

With Kali Linux 2020.1 we announced our new style of images, “installer” & “live”.

Issue: It was intended that both “installer” & “live” could be customized during setup, to select which metapackage and desktop environment to use. When we did that, we couldn’t include metapackages beyond default in those images, as it would create too large of an ISO. As the packages were not in the image, if you selected anything other than the default options it would require network access to obtain the missing packages beyond default. After release, we noticed some users selecting “everything” and then waiting hours for installs to happen. They couldn’t understand why the installs were taking so long. We also used different software on the back end to generate these images, and a few bugs slipped through the cracks (which explains the 2020.1a and 2020.1b releases).

Solutions:

• We have removed kali-linux-everything as an install-time option (which is every package in the Kali Linux repository) in the installer image, as you can imagine that would have taken a long time to download and wait for during install.
• We have cached kali-linux-large & every desktop environment into the install image (which is why it’s a little larger to download than before) – allowing for a COMPLETE offline network install.
• We have removed customization for “live” images – the installer switched back to copying the content of the live filesystem, again allowing a full offline install but forcing usage of our default XFCE desktop.

Summary:

• If you want to run Kali from a live image (DVD or USB stick), please use “live”.
• If you want anything else, please use “installer”.
• If you want anything other than XFCE as your desktop environment, please use “installer”.
• If you are not sure, get “installer”.

Also, please keep in mind that on an actual assessment “more” is not always “better”. There are very few reasons to install kali-linux-everything, and many reasons not to. To those of you who were selecting this option, we highly suggest you take some time and educate yourself on Kali before using it. Kali, or any other pentest distribution, is not a “turn key auto hack” solution. You still need to learn your platform, learn your tools, and educate yourself in general. Consider what you are really telling Kali to do when you install kali-linux-everything. It’s similar to going into your phone’s app store and saying “install everything!” That’s not likely to have good results. We provide a lot of powerful tools and options in Kali, and while we may have a reputation of “providing machine guns to monkeys”, we actually expect you to know what you are doing. Kali is not going to hold your hand. It expects you to do the work of learning, and Kali will be unforgiving if you don’t.

New Key Packages & Icons

Just like every Kali Linux release, we include the latest packages possible. Key ones to point out in this release are:

• GNOME 3.36 – a few of you may have noticed a bug that slipped in during the first 12 hours of the update being available. We’re sorry about this, and have measures in place for it to not happen again.
• Joplin – we are planning on replacing CherryTree with this in Kali Linux 2020.3!
• Nextnet
• Python 3.8
• SpiderFoot

For the time being, as a temporary measure due to certain tools needing it, we have re-included python2-pip. Python 2 has now reached “End Of Life” and is no longer getting updated. Tool makers, please, please, please port to Python 3. Users of tools, if you notice that a tool is not on Python 3 yet, you can help too! It is not going to be around forever.

Whilst talking about packages, we have also started to refresh our package logos for each tool. You’ll notice them in the Kali Linux menu, as well as the tools page on GitLab (more information on this coming soon!). If your tool has a logo and we have missed it, please let us know on the bug tracker.

WSLconf

WSLconf happened earlier this year, and @steev gave a 35-minute talk on “How We Use WSL at Kali“. Go check it out!

Behind the Scenes, Infrastructure Improvements

We have been celebrating the arrival of new servers, which over the last few weeks we have been migrating to. This includes a new ARM build server and what we use for package testing. This may not be directly noticeable, but you may reap the benefits of it!

If you want to help out with Kali, we have added a new section to our documentation showing how to submit an autopkgtest. Feedback is welcome!

Kali Linux NetHunter

We were so excited about some of the work that has been happening with NetHunter recently, we already did a mid-term release to showcase it and get it to you as quickly as possible. On top of all the previous NetHunter news there is even more to announce this time around!

• Nexmon support has been revived, bringing WiFi monitor support and frame injection to wlan0 on the Nexus 6P, Nexus 5, Sony Xperia Z5 Compact, and more!
• OnePlus 3T images have been added to the download page.
• We have crossed 160 different kernels in our repository, allowing NetHunter to support over 64 devices! Yes, over 160 kernels and over 64 devices supported. Amazing.
• Our documentation page has received a well-deserved refresh, especially the kernel development section.

One of the most common questions to come in about NetHunter is “What device should I run it on?”. Keep your eye on this page to see what your options are on an automatically updated basis! When you think about the amount of power NetHunter provides in such a compact package, it really is mind blowing. It’s been amazing to watch this progress, and the entire Kali team is excited to show you what is coming in the future.

Download Kali Linux 2020.2

Fresh images: So what are you waiting for? Start downloading already! Seasoned Kali users are already aware of this, but for the ones who are not, we also produce weekly builds that you can use. If you can’t wait for our next release and you want the latest packages when you download the image, you can just use the weekly image instead. This way you’ll have fewer updates to do. Just know these are automated builds that we don’t QA like we do our standard release images.

Existing Upgrades: If you already have an existing Kali installation, remember you can always do a quick update:

kali@kali:~$ echo "deb http://http.kali.org/kali kali-rolling main non-free contrib" | sudo tee /etc/apt/sources.list
kali@kali:~$
kali@kali:~$ sudo apt update && sudo apt -y full-upgrade
kali@kali:~$
kali@kali:~$ [ -f /var/run/reboot-required ] && sudo reboot -f
kali@kali:~$

You should now be on Kali Linux 2020.2. We can do a quick check by doing:

kali@kali:~$ grep VERSION /etc/os-release
VERSION="2020.2"
VERSION_ID="2020.2"
VERSION_CODENAME="kali-rolling"
kali@kali:~$
kali@kali:~$ uname -v
#1 SMP Debian 5.5.17-1kali1 (2020-04-21)
kali@kali:~$
kali@kali:~$ uname -r
5.5.0-kali2-amd64
kali@kali:~$

NOTE: The output of uname -r may be different depending on the system architecture. As always, should you come across any bugs in Kali, please submit a report on our bug tracker. We’ll never be able to fix what we don’t know is broken! And Twitter is not a bug tracker!

Puppet’s 2019 State of DevOps Report highlights that integrating security into DevOps practices results in better business outcomes

Amrata Joshi
27 Sep 2019
5 min read
On Wednesday, Puppet announced the findings of its eighth annual State of DevOps Report. This report reveals practices and patterns that can help organisations integrate security into the software development lifecycle. As per Puppet’s 2019 State of DevOps Report, 22% of firms at the highest level of security integration have reached an advanced stage of DevOps maturity, compared with just 6% of firms without security integration.

Among firms with an overall ‘significant to full’ integration status, the report finds Europe ahead of the Asia Pacific region and the US, at 43% versus 38% or less.

Alanna Brown, Senior Director of Community and Developer Relations at Puppet and author of the State of DevOps report, said, “The DevOps principles that drive positive outcomes for software development — culture, automation, measurement and sharing — are the same principles that drive positive security outcomes. Organisations that are serious about improving their security practices and posture should start by adopting DevOps practices.”

Brown added, “This year's report affirms our belief that organisations who ignore or deprioritise DevOps, are the same companies who have the lowest level of security integration and who will be hit the hardest in the case of a breach.”

Key findings of the State of DevOps Report 2019

According to the report, firms at the highest level of security integration can deploy to production on demand at a higher rate than firms at all other levels of integration. Currently, 61% of these firms are able to do so, while among organisations that have not integrated security at all, less than half (49%) can deploy on demand.

82% of survey respondents at firms with the highest level of security integration are confident that their security practices and policies improve their firm’s security posture; among respondents at firms without security integration, only 38% had that level of confidence.

Firms that integrate security throughout their lifecycle are more than twice as likely to stop a push to production for a medium security vulnerability.

In the middle stages of security integration, delivery and security teams experience more friction when collaborating: software delivery slows down and audit issues increase. The report finds that friction is higher for respondents who work in security roles than for those in non-security roles. But teams that persist through this stage see the results of their hard work sooner.

On remediation time, the report finds:

• Just 7% of respondents can remediate a critical vulnerability in less than an hour.
• 32% of respondents can remediate in one hour to less than one day.
• 33% of respondents can remediate in one day to less than one week.

Michael Stahnke, VP of Platform Engineering, CircleCI, said, “It shouldn’t be a surprise to anyone that integrating security into the software delivery lifecycle requires intentional effort and deep collaboration across teams.”

Stahnke added, “What did surprise me, however, was that the practices that promote cross-team collaboration had the biggest impact on the teams’ confidence in the organisation’s security posture. Turns out, empathy and trust aren’t automatable.”

Factors that determine whether an organizational structure is DevOps-ready:

• The flexibility of the current organizational structure.
• The organizational culture.
• How isolated the different functions are.
• The skillsets of your team.
• The relationship between team leaders and teams.

Best practices for improving security posture:

• Development and security teams collaborate on threat models.
• Security tools are integrated into the development integration pipeline so that engineers feel confident they are not introducing known security problems into their codebases.
• Security requirements, both functional and non-functional, are prioritised as part of the product backlog.
• Security experts evaluate automated tests and review changes in high-risk areas of the code, such as cryptography and authentication systems.
• Infrastructure-related security policies are reviewed before deployment.

Andrew Plato, CEO, Anitian, said, “Puppet’s State of DevOps report provides outstanding insights into the ongoing challenges of integrating security and DevOps teams.”

Plato added, “While the report outlines many problems, it also highlights the gains that arise when DevOps and security are fully integrated. These benefits include increased security effectiveness, more robust risk management, and tighter alignment of business and security goals. These insights mirror our experiences at Anitian implementing our security automation platform. We are proud to be a sponsor of the State of DevOps report as well as a technology partner with Puppet. We anticipate referencing this report regularly in our engagement with our customers as well as the DevOps and security communities.”

To summarize, organizations that want to improve their security posture and practices should adopt DevOps practices, just as the organizations at the highest levels of DevOps maturity have fully integrated security practices.

Check out the complete 2019 State of DevOps Report here.

Other interesting news in cloud & networking:

• GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more
• Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
• DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020


Facebook is reportedly working on Threads app, an extension of Instagram's 'Close friends' feature to take on Snapchat

Amrata Joshi
02 Sep 2019
3 min read
Facebook is seemingly working on a new messaging app called Threads that would help users share their photos, videos, location, speed, and battery life with only their close friends, The Verge reported earlier this week. This means users can selectively share content with their friends while not revealing to others the list of close friends with whom the content is shared. The app currently does not display real-time location, but it might notify by stating that a friend is “on the move”, as per the report by The Verge.

How do Threads work?

As per the report by The Verge, the Threads app appears to be similar to the existing messaging product inside the Instagram app. It seems to be an extension of the ‘Close friends’ feature for Instagram stories, where users can create a list of close friends and make their stories visible just to them. With Threads, users who have opted in for ‘automatic sharing’ of updates will be able to regularly show their status updates and real-time information in the main feed to their close friends. The auto-sharing of statuses will be done using the phone’s sensors. Also, messages coming from your friends will appear in a central feed, with a green dot indicating which of your friends are currently active/online. If a friend has posted a story recently on Instagram, you will be able to see it from the Threads app as well. It also features a camera, which can be used to capture photos and videos and send them to close friends. While Threads is currently being tested internally at Facebook, there is no clarity about its launch date.

Direct’s revamped version or Snapchat’s potential competitor?

With Threads, if Instagram manages to create a niche around ‘close friends’, it might shift a significant proportion of Snapchat’s users to its platform. In 2017, the team had experimented with Direct, a standalone camera messaging app, which had many filters similar to Snapchat’s. But this year in May, the company announced that it would no longer support Direct. Threads looks like Facebook’s second attempt to compete with Snapchat.

https://twitter.com/MattNavarra/status/1128875881462677504

The Threads app’s focus on strengthening ‘close friends’ relationships might encourage more sharing of personal data, even including location and battery life. This raises the question: is our content really safe? Just three months ago, Instagram was in the news for exposing personal data of millions of influencers online. The exposed data included contact information of Instagram influencers, brands and celebrities.

https://twitter.com/hak1mlukha/status/1130532898359185409

According to Instagram’s current Terms of Use, it does not get ownership over the information shared on it. But here’s the catch: it also states that it has the right to host, use, distribute, run, modify, copy, publicly perform or translate, display, and create derivative works of user content as per the user’s privacy settings. In essence, the platform has a right to use the content we post.

Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong protests   Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules


Apple previews macOS Catalina 10.15 beta, featuring Apple Music, TV apps, security, the zsh shell, DriverKit, and much more!

Amrata Joshi
04 Jun 2019
6 min read
Yesterday, Apple previewed the next version of macOS, called Catalina, at its ongoing Worldwide Developers Conference (WWDC) 2019. macOS 10.15, or Catalina, comes with new features, apps, and technology for developers. With Catalina, Apple is replacing iTunes with entertainment apps such as Apple Podcasts, Apple Music, and the Apple TV app. macOS Catalina is expected to be released this fall.

Craig Federighi, Apple’s senior vice president of Software Engineering, said, “With macOS Catalina, we’re bringing fresh new apps to the Mac, starting with new standalone versions of Apple Music, Apple Podcasts and the Apple TV app.” He further added, “Users will appreciate how they can expand their workspace with Sidecar, enabling new ways of interacting with Mac apps using iPad and Apple Pencil. And with new developer technologies, users will see more great third-party apps arrive on the Mac this fall.”

What’s new in macOS Catalina

Sidecar feature

Sidecar is a new feature in macOS 10.15 that lets users extend their Mac desktop by using an iPad as a second display or as a high-precision input device across creative Mac apps. Users can draw, sketch or write in any Mac app that supports stylus input by pairing the iPad with an Apple Pencil. Sidecar can be used for editing video with Final Cut Pro X, marking up iWork documents or drawing with Adobe Illustrator.

iPad app support

Catalina comes with iPad app support, a new way for developers to port their iPad apps to the Mac. Previously, this project was codenamed “Marzipan,” but it’s now called Catalyst. Developers will now be able to use Xcode to target their iPad apps at macOS Catalina. Twitter is planning on porting its iOS Twitter app to Mac, and Atlassian is planning to bring its Jira iPad app to macOS Catalina. Though it is still not clear how many developers are going to support this porting, Apple is encouraging developers to port their iPad apps to the Mac.

https://twitter.com/Atlassian/status/1135631657204166662

https://twitter.com/TwitterSupport/status/1135642794473558017

Apple Music

Apple Music is a new music app that will help users discover new music with over 50 million songs, playlists, and music videos. Users will now have access to their entire music library, including the songs they have downloaded, purchased or ripped from a CD.

Apple TV app

The Apple TV app features Apple TV channels, personalized recommendations, and more than 100,000 iTunes movies and TV shows. Users can browse, buy or rent, and also enjoy 4K HDR and Dolby Atmos-supported movies. It also comes with a Watch Now section that has the Up Next option, where users can easily keep track of what they are currently watching and resume on any screen. Apple TV+, Apple’s original video subscription service, will be available in the Apple TV app this fall.

Apple Podcasts

The Apple Podcasts app features over 700,000 shows in its catalog and comes with an option to be automatically notified of new episodes as soon as they become available. This app comes with new categories, curated collections by editors around the world, and advanced search tools that help in finding episodes by host, guest or even discussion topic. Users can still easily sync their media to their devices using a cable in the new entertainment apps.

Security

In macOS Catalina, Gatekeeper checks all apps for known security issues, and the new data protections now require all apps to get permission before accessing user documents. Approve with Apple Watch lets users approve security prompts by tapping the side button on their Apple Watch. With the new Find My app, it is easy to find the location of a lost or stolen Mac, and the location can be anonymously relayed back to its owner by other Apple devices, even when the Mac is offline. Macs will be able to occasionally send a secure Bluetooth signal, which will be used to create a mesh network of other Apple devices to help people track their products. A map will then show where the device is located, helping users track it down. Also, Macs with the T2 Security Chip now support Activation Lock, which will make them less attractive to thieves.

DriverKit

The macOS Catalina 10.15 SDK beta comes with the DriverKit framework, which can be used for creating device drivers that the user installs on their Mac. Drivers built with DriverKit run in user space for improved system security and stability. The framework provides C++ classes for IO services, memory descriptors, device matching, and dispatch queues. DriverKit further defines IO-appropriate types for numbers, strings, collections, and other common types. You use these with family-specific driver frameworks like USBDriverKit and HIDDriverKit.

zsh shell on Mac

In the macOS Catalina beta, which is currently available only to members of the Apple Developer Program, the Mac uses zsh as the default login shell and interactive shell. Users can make zsh the default in earlier versions of macOS as well. Currently, bash is the default shell in macOS Mojave and earlier. zsh is also compatible with the Bourne shell (sh) and bash. The company is also signalling that developers should start moving to zsh on macOS Mojave or earlier. Since bash isn’t a modern shell, it seems the company thinks that switching to something less dated would make more sense.

https://twitter.com/film_girl/status/1135738853724000256

https://twitter.com/_sjs/status/1135715757218705409

https://twitter.com/wongmjane/status/1135701324589256704

Additional features

Safari now has an updated start page that uses Siri Suggestions to elevate frequently visited sites, bookmarks, iCloud tabs, reading list selections and links sent in Messages. macOS Catalina comes with an option to block email from a specified sender, mute an overly active thread, and unsubscribe from commercial mailing lists. Reminders has been redesigned and now comes with a new user interface that makes it easier to create, organize and track reminders.

It seems users are excited about the announcements made by the company and are looking forward to exploring the possibilities with the new features.

https://twitter.com/austinnotduncan/status/1135619593165189122

https://twitter.com/Alessio____20/status/1135825600671883265

https://twitter.com/MasoudFirouzi/status/1135699794360438784

https://twitter.com/Allinon85722248/status/1135805025928851457

To know more about this news, check out Apple’s post.

Apple proposes a “privacy-focused” ad click attribution model for counting conversions without tracking users Apple Pay will soon support NFC tags to trigger payments U.S. Supreme Court ruled 5-4 against Apple on its App Store monopoly case


Firefox releases v66.0.4 and 60.6.2 to fix the expired certificate problem that ended up disabling add-ons

Bhagyashree R
06 May 2019
3 min read
Last week on Friday, Firefox users were left infuriated when all their extensions were abruptly disabled. Fortunately, Mozilla fixed this issue in yesterday’s releases, Firefox 66.0.4 and Firefox 60.6.2.

https://twitter.com/mozamo/status/1124484255159971840

This is not the first time Firefox users have encountered this type of problem. A similar issue was reported back in 2016, and it seems that proper steps were not taken to prevent it from recurring.

https://twitter.com/Theobromia/status/1124791924626313216

Multiple users reported that all add-ons were disabled in Firefox because of failed verification. Users were also unable to download any new add-ons and were shown a "Download failed. Please check your connection" error despite having a working connection. This happened because the certificate with which the add-ons were signed had expired. The validity timestamps in the certificate were:

Not Before: May 4 00:09:46 2017 GMT
Not After : May 4 00:09:46 2019 GMT

Mozilla did share a temporary hotfix (“hotfix-update-xpi-signing-intermediate-bug-1548973”) before releasing a product with the issue permanently fixed.

https://twitter.com/mozamo/status/1124627930301255680

To apply this hotfix automatically, users need to enable Studies, a feature through which Mozilla tries out new features before releasing them to general users. The Studies feature is enabled by default, but if you have previously opted out of it, you can enable it by navigating to Options | Privacy & Security | Allow Firefox to install and run studies.

https://twitter.com/mozamo/status/1124731439809830912

Mozilla released Firefox 66.0.4 for desktop and Android users and Firefox 60.6.2 for ESR (Extended Support Release) users yesterday with a permanent fix for this issue. These releases repair the certificate to re-enable web extensions that were disabled because of the issue. There are still some issues that need to be resolved, which Mozilla is currently working on:

• A few add-ons may appear unsupported or not appear in 'about:addons'. Mozilla assures that add-on data will not be lost, as it is stored locally and can be recovered by re-installing the add-ons.
• Themes will not be re-enabled and will switch back to default.
• If a user’s home page or search settings are customized by an add-on, they will be reset to default.
• Users might see that Multi-Account Containers and Facebook Container are reset to their default state. Containers is a functionality that allows you to segregate your browsing activities within different profiles. As an aftereffect of this certificate issue, data that might be lost includes configuration data about which containers to enable or disable, container names, and icons.

Many users depend on Firefox’s extensibility to get their work done, and it is obvious that this issue has left many of them sour. “This is pretty bad for Firefox. I wonder how much people straight up & left for Chrome as a result of it,” a user commented on Hacker News.

Read the Mozilla Add-ons Blog for more details.

Mozilla’s updated policies will ban extensions with obfuscated code Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
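The root cause was a signing certificate whose validity window had lapsed (note the "Not After" date above). As an illustration only, not part of Mozilla's fix, the short Python sketch below shows how such an expiry can be detected programmatically. It assumes the third-party cryptography package is installed, and the file path is a hypothetical example.

from datetime import datetime, timezone

from cryptography import x509

# Hypothetical path; any PEM-encoded certificate works here.
with open("intermediate.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# not_valid_before / not_valid_after correspond to the Not Before / Not After fields.
now = datetime.now(timezone.utc)
expired = now > cert.not_valid_after.replace(tzinfo=timezone.utc)

print("Subject:  ", cert.subject.rfc4514_string())
print("Not After:", cert.not_valid_after)
print("Expired!" if expired else "Still valid")

Running a check like this against every certificate in a signing chain, well before the expiry date, is the kind of monitoring that would catch this class of outage early.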

OpenBSD 6.4 released

Savia Lobo
19 Oct 2018
3 min read
Yesterday, OpenBSD founder Theo de Raadt announced the release of a new version of the free and open-source, security-focused OS: OpenBSD 6.4. The most interesting feature in OpenBSD 6.4 is the unveil() system call, which allows applications to sandbox themselves by restricting their own access to the file system. This is especially useful for programs that operate on unknown data, which may try to exploit or crash the application. OpenBSD 6.4 also includes many driver improvements and allows OpenSSH’s configuration files to use service names instead of port numbers. Also, the Clang compiler will now replace some risky ROP instructions with safe alternatives.

Other features and improvements in OpenBSD 6.4

Improved hardware support

• ACPI support on OpenBSD/arm64 platforms.
• New acpipci(4/arm64) driver providing support for PCI host bridges based on information provided by ACPI.
• Added a sensor for port replicator status to acpithinkpad(4).
• Support for Allwinner H3 and A64 SoC in scitemp(4).
• New bnxt(4) driver for Broadcom NetXtreme-C/E PCI Express Ethernet adapters based on the Broadcom BCM573xx and BCM574xx chipsets, enabled on amd64 and arm64 platforms.
• The radeondrm(4) driver was updated to code based on Linux 4.4.155.

IEEE 802.11 wireless stack improvements

OpenBSD 6.4 has a new 'join' feature (managed with ifconfig(8)) with which the kernel automatically switches between different WiFi networks. Also, ifconfig(8) scan performance has been improved for many devices.

Generic network stack improvements

A new eoip(4) interface has been added for the MikroTik Ethernet over IP (EoIP) encapsulation protocol. Also, new global IPsec counters are available via netstat(1). trunk(4) now has LACP administrative knobs for mode, timeout, system priority, port priority, and ifq priority.

Security improvements

OpenBSD 6.4 introduces a new RETGUARD security mechanism on amd64 and arm64, which uses per-function random cookies to protect access to function return instructions, making them harder to use in ROP gadgets. It also adds a SpectreRSB mitigation and an Intel L1 Terminal Fault mitigation on amd64. clang(1) includes a pass that identifies common instructions which may be useful in ROP gadgets and replaces them with safe alternatives on amd64 and i386. The Retpoline mitigation against Spectre Variant 2 has been enabled in clang(1) and in assembly files on amd64 and i386. amd64 now uses eager-FPU switching to prevent FPU state information from speculatively leaking across protection boundaries. Because Simultaneous MultiThreading (SMT) uses core resources in a shared and potentially unsafe manner, it is now disabled by default; it can be enabled with the new hw.smt sysctl(2) variable. The audio recording feature is now disabled by default and can be enabled with the new kern.audio.record sysctl(2) variable. getpwnam(3) and getpwuid(3) no longer return a pointer to static storage but a managed allocation which gets unmapped, allowing detection of access to stale entries. sshd(8) includes improved defence against user enumeration attacks.

To know more about the other features in detail, head over to the OpenBSD 6.4 release log.

KUnit: A new unit testing framework for Linux Kernel The kernel community attempting to make Linux more secure
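To give a flavor of how the headline unveil() feature works: a process declares the filesystem paths it needs, then locks the list, after which everything else becomes invisible to it. The snippet below is a speculative illustration, not from the release notes. It calls the C function through Python's ctypes on an OpenBSD 6.4 (or later) system, and the path used is a made-up example.

import ctypes
import ctypes.util

# Load the C library; unveil(2) is only present on OpenBSD 6.4 and later.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def unveil(path, permissions):
    # unveil(NULL, NULL) locks the current list of visible paths.
    p = path.encode() if path is not None else None
    perms = permissions.encode() if permissions is not None else None
    if libc.unveil(p, perms) != 0:
        raise OSError(ctypes.get_errno(), "unveil failed")

unveil("/var/www/data", "r")   # hypothetical path: read-only access to this subtree
unveil(None, None)             # lock the policy; no further paths can be exposed

After the final call, any attempt by the process to open files outside /var/www/data fails as if they did not exist, which is exactly the self-sandboxing behaviour the release notes describe.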


Blazor 0.6 release and what it means for WebAssembly

Amarabha Banerjee
05 Oct 2018
3 min read
WebAssembly is changing the way we develop applications for the web. Graphics-heavy applications, browser-based games, and interactive data visualizations seem to have found a better way into our UI – the WebAssembly way. The latest experimental Blazor 0.6 release is an indication that Microsoft has identified WebAssembly as an important upcoming trend and is extending support for it to its large base of developers.

Blazor is an experimental web UI framework based on C#, Razor, and HTML that runs in the browser via WebAssembly. Blazor promises to greatly simplify the task of building fast and beautiful single-page applications that run in any browser. The following image shows the architecture of Blazor. Source: MSDN

Blazor ships with its own JavaScript file, Blazor.js. It uses Mono, an open source implementation of Microsoft’s .NET Framework based on the ECMA standards for C# and the Common Language Runtime (CLR). It also uses Razor, a template engine that combines C# with HTML to create dynamic web content. Together, these promise dynamic and fast web apps without the popular JavaScript front-end frameworks, which lowers the learning curve for existing C# developers.

Microsoft released the 0.6 experimental version of Blazor on October 2nd. This release includes new features for authoring templated components and enables using server-side Blazor with the Azure SignalR Service. Another important piece of news from this release is that the server-side Blazor model will be included, as Razor components, in the .NET Core 3.0 release.

The major highlights of this release are:

• Templated components: define components with one or more template parameters, specify template arguments using child elements, generic typed components with type inference, and Razor templates.
• Refactored server-side Blazor startup code to support the Azure SignalR Service.

Now the important question is: how is this release going to fuel the growth of WebAssembly-based web development? The answer is that it will probably take some time for WebAssembly to become mainstream, because this is just an alpha-quality experimental release, which means there will be plenty of changes before the final release arrives. But why Blazor is a step in the right direction can be explained by the fact that, unlike former Microsoft platforms such as Silverlight, it does not have its own rendering engine. Hence pixel rendering in the browser is not its responsibility. That’s what makes it lightweight.

Blazor uses the browser’s DOM to display data. However, the C# code running in WebAssembly cannot access the DOM directly; it has to go through JavaScript. The process presently looks like this. Source: Learn Blazor

The way this process works might change with the beta and subsequent releases of Blazor, so that the intermediate JavaScript layer can eventually be avoided. But that’s what WebAssembly is at present: a bridge between your code and the browser, which still runs on JavaScript. Blazor could prove to be a strong supporting tool for the growth of WebAssembly-based apps.

Why is everyone going crazy over WebAssembly? Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux Unity Benchmark report approves WebAssembly load times and performance in popular web browsers


Laravel 5.7 released with support for email verification, improved console testing

Prasad Ramesh
06 Sep 2018
3 min read
Laravel 5.7.0 has been released. The latest version of the PHP framework includes support for email verification, guest policies, dump-server, improved console testing, notification localization, and other changes.

The versioning scheme in Laravel follows the convention paradigm.major.minor. Major releases are done every six months, in February and August, while minor releases may be released every week without breaking any functionality. For LTS releases like Laravel 5.5, bug fixes are provided for two years and security fixes for three years, giving LTS releases the longest support window. For general releases, bug fixes are provided for six months and security fixes for a year. After upgrading, when referencing the Laravel framework or its components from your application or package, always use a version constraint like 5.7.*, since major releases can contain breaking changes.

Laravel Nova

Laravel Nova is a pleasant-looking administration dashboard for Laravel applications. The primary feature of Nova is the ability to administer the underlying database records using Laravel Eloquent. Additionally, Nova supports filters, lenses, actions, queued actions, metrics, authorization, custom tools, custom cards, and custom fields.

Email Verification

Laravel 5.7 introduces optional email verification for the authentication scaffolding included with the framework. To accommodate this feature, an email_verified_at timestamp column has been added to the default users table migration that ships with the framework.

Guest User Policies

In previous Laravel versions, authorization gates and policies automatically returned false for unauthenticated visitors to your application. Now you can allow guests to pass through authorization checks by declaring an "optional" type-hint or supplying a null default value for the user argument definition:

Gate::define('update-post', function (?User $user, Post $post) {
    // ...
});

Symfony Dump Server

Laravel 5.7 offers integration with the dump-server command via a package by Marcel Pociot. To get started, run the dump-server Artisan command:

php artisan dump-server

Once the server has started, all calls to dump will be shown in the dump-server console window instead of your browser, which allows inspection of values without mangling your HTTP response output.

Notification Localization

You can now send notifications in a locale other than the current language, and Laravel will even remember this locale if the notification is queued. Localization of many notifiable entries can also be achieved via the Notification facade.

Console Testing

Laravel 5.7 makes it easy to "mock" user input for console commands using the expectsQuestion method. Additionally, you can specify the expected exit code and the text you expect the console command to output using the assertExitCode and expectsOutput methods.

These were some of the major changes in Laravel 5.7; for a complete list, visit the Laravel Release Notes.

Building a Web Service with Laravel 5
Google App Engine standard environment (beta) now includes PHP 7.2
Perform CRUD operations on MongoDB with PHP

Bloomberg says Google, Mastercard covertly track customers’ offline retail habits via a secret million dollar ad deal

Melisha Dsouza
31 Aug 2018
3 min read
Google and Mastercard have apparently signed a deal that was kept secret from most of the two billion Mastercard holders. The deal allows Google to track users' offline buying habits: the search engine giant has been tracking offline purchases made in stores through Mastercard purchase histories and correlating them with online ad interactions. Neither company has released an official statement to the public about the arrangement.

In May 2017, Google announced a service called "Store Sales Measurement", which recorded about 70 percent of US credit and debit card transactions through third-party partnerships. Selected Google advertisers had access to this new tool, which tracked whether the ads they ran online led to a sale at a physical store in the U.S. As reported by Bloomberg, an anonymous source familiar with the deal stated that Mastercard also provided Google with customers' transaction data, thus contributing to that 70 percent share. It is highly probable that other credit card companies also contribute their customers' transaction data. Advertisers spend lavishly on Google to gain valuable insights into the link between digital ads and a website visit or an online purchase, which supports the speculation that the deal is profitable for Google.

How do they track how you shop?

A customer logs into their Google account on the web and clicks on a Google ad. They may browse a certain item without purchasing it right away. If, within 30 days, they use their Mastercard to buy the same item in a physical store, Google will send the advertiser a report about the product and the effectiveness of its ads, along with a section for "offline revenue" letting the advertiser know the resulting retail sales. All of this raises the question of how much Google actually knows about your personal details.

Both Google and Mastercard have clarified to The Verge that the data is anonymized in order to protect personally identifiable information. However, Google declined to confirm the deal with Mastercard. A Google spokesperson released a statement to MailOnline saying: "Before we launched this beta product last year, we built a new, double-blind encryption technology that prevents both Google and our partners from viewing our respective users’ personally identifiable information. We do not have access to any personal information from our partners’ credit and debit cards, nor do we share any personal information with our partners. Google users can opt-out with their Web and App Activity controls, at any time.”

This new controversy follows closely on the heels of an earlier debacle last week, when it was discovered that Google provides advertisers with location history data collated from Google Maps and other, more granular data points collected by its Android operating system. That data, however, never revealed whether a customer actually purchased a product. Toggling off "Web and App Activity" (enabled by default) turns this tracking off. The setting also controls whether Google can pinpoint your exact GPS coordinates through Maps data and browser searches, and whether it can cross-check a customer's offline purchases with their online ad-related activity.

Read more in-depth coverage of this news, first reported at Bloomberg.
Google slams Trump’s accusations, asserts its search engine algorithms do not favor any political ideology
Google’s Protect your Election program: Security policies to defend against state-sponsored phishing attacks, and influence campaigns
Google Titan Security key with secure FIDO two factor authentication is now available for purchase


Announcing Cloud Build, Google’s new continuous integration and delivery (CI/CD) platform

Vijin Boricha
27 Jul 2018
2 min read
In today’s world, no software developer expects to wait through long release times and development cycles, all thanks to DevOps. Cloud platforms, which are popular for providing flexible infrastructure across different organizations, can now offer better solutions with the help of DevOps. Applications can receive bug fixes and updates almost every day, but such update cycles require a CI/CD framework. Google recently released its all-new continuous integration/continuous delivery framework, Cloud Build, at Google Cloud Next ’18 in San Francisco.

Cloud Build is a complete continuous integration and continuous delivery platform that helps you build software at scale across all languages. It gives developers complete control over a variety of environments such as VMs, serverless, Firebase, or Kubernetes. Cloud Build supports Docker, giving developers the option of automating deployments to Google Kubernetes Engine or Kubernetes for continuous delivery. It also supports triggers for application deployment, which help launch an update whenever certain conditions are met.

Google also tries to eliminate the pain of managing build servers by providing a free tier of Cloud Build with up to 120 build minutes per day and up to 10 concurrent builds. Once the free 120 build minutes are exhausted, additional build minutes are charged at $0.0034 per minute.

Another plus point of Cloud Build is that it automatically identifies package vulnerabilities before deployment, and it allows users to run builds on local machines and later deploy in the cloud. In case of issues, Cloud Build provides detailed insights that ease debugging via build errors and warnings. It also provides an option to filter build results using tags or queries to identify time-consuming tests or slow-performing builds.

Key features of Google Cloud Build

Simpler and faster commit-to-deploy time
Supports language-agnostic builds
Options to create pipelines to automate deployments
Flexibility to define custom workflows
Control build access with Google Cloud security

Check out the Google Cloud Blog if you want to learn more about how to start implementing Google's CI/CD offerings.

Related Links
Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
Google’s event-driven serverless platform, Cloud Function, is now generally available
Google Cloud Launches Blockchain Toolkit to help developers build apps easily