
How-To Tutorials - Cloud Computing

121 Articles

Automating OCR and Translation with Google Cloud Functions: A Step-by-Step Guide

Agnieszka Koziorowska, Wojciech Marusiak
05 Nov 2024
15 min read
This article is an excerpt from the book "Google Cloud Associate Cloud Engineer Certification and Implementation Guide" by Agnieszka Koziorowska and Wojciech Marusiak. The book serves as a guide for students preparing for the ACE certification, offering invaluable practical knowledge and hands-on experience in implementing various Google Cloud Platform services. By actively engaging with the content, you'll gain the confidence and expertise needed to excel in your certification journey.

Introduction

In this article, we will walk you through an example of implementing Google Cloud Functions for optical character recognition (OCR) on Google Cloud Platform. This tutorial demonstrates how to automate the process of extracting text from an image, translating the text, and storing the results using Cloud Functions, Pub/Sub, and Cloud Storage. By leveraging the Google Cloud Vision and Translation APIs, we can create a workflow that efficiently handles image processing and text translation. The article provides detailed steps to set up and deploy Cloud Functions using Golang, covering everything from creating storage buckets to deploying and running your function to translate text.

Google Cloud Functions example

Now that you've learned what Cloud Functions is, I'd like to show you how to implement a sample Cloud Function. We will guide you through optical character recognition (OCR) on Google Cloud Platform with Cloud Functions. Our use case is as follows:

1. An image with text is uploaded to Cloud Storage.
2. A triggered Cloud Function utilizes the Google Cloud Vision API to extract the text and identify the source language.
3. The text is queued for translation by publishing a message to a Pub/Sub topic.
4. A Cloud Function employs the Translation API to translate the text and stores the result in the translation queue.
5. Another Cloud Function saves the translated text from the translation queue to Cloud Storage.
6. The translated results are available in Cloud Storage as individual text files for each translation.

We need to download the samples first; we will use Golang as the programming language. Source files can be downloaded from https://github.com/GoogleCloudPlatform/golang-samples. Before working with the OCR function sample, we recommend enabling the Cloud Translation API and the Cloud Vision API. If they are not enabled, your function will throw errors and the process will not complete.

Let's start with deploying the function:

1. We need to create a Cloud Storage bucket. Create your own bucket with a unique name; please refer to the documentation on bucket naming at https://cloud.google.com/storage/docs/buckets. We will use the following command:
   gsutil mb gs://wojciech_image_ocr_bucket
2. We also need to create a second bucket to store the results:
   gsutil mb gs://wojciech_image_ocr_bucket_results
3. We must create a Pub/Sub topic to which the extracted text will be published for translation, using gcloud pubsub topics create YOUR_TOPIC_NAME. We used the following command to create it:
   gcloud pubsub topics create wojciech_translate_topic
4. We also need a second Pub/Sub topic to publish the translation results:
   gcloud pubsub topics create wojciech_translate_topic_results
5. Next, we will clone the Google Cloud GitHub repository with the Go sample code:
   git clone https://github.com/GoogleCloudPlatform/golang-samples
6. From the repository, go to the golang-samples/functions/ocr/app/ directory, which contains the Cloud Functions we want to deploy.
7. We recommend reviewing the included Go files to understand the code in more detail. Remember to change the values of your storage buckets and Pub/Sub topic names (a minimal sketch of one of these entry points appears after the step list below).
8. We will deploy the first function, which processes images, with the following command:
   gcloud functions deploy ocr-extract-go --runtime go119 --trigger-bucket wojciech_image_ocr_bucket --entry-point ProcessImage --set-env-vars "^:^GCP_PROJECT=wmarusiak-book-351718:TRANSLATE_TOPIC=wojciech_translate_topic:RESULT_TOPIC=wojciech_translate_topic_results:TO_LANG=es,en,fr,ja"
9. After deploying the first Cloud Function, we must deploy the second one to translate the text. We can use the following command:
   gcloud functions deploy ocr-translate-go --runtime go119 --trigger-topic wojciech_translate_topic --entry-point TranslateText --set-env-vars "GCP_PROJECT=wmarusiak-book-351718,RESULT_TOPIC=wojciech_translate_topic_results"
10. The last part of the complete solution is a third Cloud Function that saves the results to Cloud Storage:
    gcloud functions deploy ocr-save-go --runtime go119 --trigger-topic wojciech_translate_topic_results --entry-point SaveResult --set-env-vars "GCP_PROJECT=wmarusiak-book-351718,RESULT_BUCKET=wojciech_image_ocr_bucket_results"
11. We are now free to upload any image containing text. It will be processed first, then translated and saved into our Cloud Storage bucket.
12. We uploaded four sample images downloaded from the internet that contain some text. The ocr-extract-go Cloud Function's logs show many entries; some of them show the language detected in the image and the extracted text.
    Figure 7.22 – Cloud Function logs from the ocr-extract-go function
13. ocr-translate-go translates the text detected by the previous function.
    Figure 7.23 – Cloud Function logs from the ocr-translate-go function
14. Finally, ocr-save-go saves the translated text into the Cloud Storage bucket.
    Figure 7.24 – Cloud Function logs from the ocr-save-go function
15. If we go to the Cloud Storage bucket, we'll see the saved translated files.
    Figure 7.25 – Translated images saved in the Cloud Storage bucket
16. We can view the content directly from the Cloud Storage bucket by clicking Download next to the file, as shown in the following screenshot.
    Figure 7.26 – Translated text from Polish to English stored in the Cloud Storage bucket

Cloud Functions is a powerful and fast way to code, deploy, and use advanced features. We encourage you to try out and deploy Cloud Functions to understand the process of using them better.
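For reference, the entry points named in steps 8 to 10 (ProcessImage, TranslateText, and SaveResult) live in the cloned golang-samples repository. The following is a minimal, simplified sketch, not the actual sample code, of what a Cloud Storage-triggered entry point such as ProcessImage can look like in Go: it detects text in the uploaded image with the Vision API and publishes it to the translation topic. The GCP_PROJECT and TRANSLATE_TOPIC environment variable names match the deployment command above, while the message format and error handling are simplifications for illustration; consult the files under golang-samples/functions/ocr/app/ for the real implementation.

    package ocr

    import (
        "context"
        "encoding/json"
        "fmt"
        "os"

        "cloud.google.com/go/pubsub"
        vision "cloud.google.com/go/vision/apiv1"
    )

    // GCSEvent is the payload of a Cloud Storage trigger event.
    type GCSEvent struct {
        Bucket string `json:"bucket"`
        Name   string `json:"name"`
    }

    // ProcessImage extracts text from an uploaded image and queues it for translation.
    // Illustrative sketch only: the real sample also detects the source language and
    // fans out one Pub/Sub message per target language listed in TO_LANG.
    func ProcessImage(ctx context.Context, e GCSEvent) error {
        visionClient, err := vision.NewImageAnnotatorClient(ctx)
        if err != nil {
            return fmt.Errorf("vision.NewImageAnnotatorClient: %v", err)
        }
        defer visionClient.Close()

        // Run text detection directly against the object in Cloud Storage.
        img := vision.NewImageFromURI(fmt.Sprintf("gs://%s/%s", e.Bucket, e.Name))
        annotations, err := visionClient.DetectTexts(ctx, img, nil, 1)
        if err != nil {
            return fmt.Errorf("DetectTexts: %v", err)
        }
        if len(annotations) == 0 {
            return nil // nothing to translate
        }

        // Queue the extracted text for translation on the Pub/Sub topic.
        pubsubClient, err := pubsub.NewClient(ctx, os.Getenv("GCP_PROJECT"))
        if err != nil {
            return fmt.Errorf("pubsub.NewClient: %v", err)
        }
        defer pubsubClient.Close()

        payload, _ := json.Marshal(map[string]string{
            "filename": e.Name,
            "text":     annotations[0].Description,
        })
        topic := pubsubClient.Topic(os.Getenv("TRANSLATE_TOPIC"))
        if _, err := topic.Publish(ctx, &pubsub.Message{Data: payload}).Get(ctx); err != nil {
            return fmt.Errorf("Publish: %v", err)
        }
        return nil
    }

TranslateText and SaveResult follow the same background-function shape, except that they are triggered by Pub/Sub messages rather than storage events.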
At the time of writing, the Google Cloud Free Tier offers a generous number of free resources we can use. Cloud Functions offers the following with its free tier:

- 2 million invocations per month (this includes both background and HTTP invocations)
- 400,000 GB-seconds and 200,000 GHz-seconds of compute time
- 5 GB of network egress per month

Google Cloud also has comprehensive tutorials that you can try to deploy. Go to https://cloud.google.com/functions/docs/tutorials to follow one.

Conclusion

In conclusion, Google Cloud Functions offers a powerful and scalable solution for automating tasks like optical character recognition and translation. Through this example, we have demonstrated how to use Cloud Functions, Pub/Sub, and the Google Cloud Vision and Translation APIs to build an end-to-end OCR and translation pipeline. By following the provided steps and code snippets, you can easily replicate this process for your own use cases. Google Cloud's generous Free Tier resources make it accessible to get started with Cloud Functions. We encourage you to explore more by deploying your own Cloud Functions and leveraging the full potential of Google Cloud Platform for serverless computing.

Author Bio

Agnieszka is an experienced Systems Engineer who has been in the IT industry for 15 years. She is dedicated to supporting enterprise customers in the EMEA region with their transition to the cloud and hybrid cloud infrastructure by designing and architecting solutions that meet both business and technical requirements. Agnieszka is highly skilled in AWS, Google Cloud, and VMware solutions and holds certifications as a specialist in all three platforms. She strongly believes in the importance of knowledge sharing and learning from others to keep up with the ever-changing IT industry.

With over 16 years in the IT industry, Wojciech is a seasoned and innovative IT professional with a proven track record of success. Leveraging extensive work experience in large and complex enterprise environments, Wojciech brings valuable knowledge to help customers and businesses achieve their goals with precision, professionalism, and cost-effectiveness. Holding leading certifications from AWS, Alibaba Cloud, Google Cloud, VMware, and Microsoft, Wojciech is dedicated to continuous learning and sharing knowledge, staying abreast of the latest industry trends and developments.


Operations and infrastructure engineering in 2019: what really mattered

Richard Gall
18 Dec 2019
6 min read
Everything is unreliable, right? If we didn't realise it before, 2019 was the year when we fully had to accept the reality of the systems we're building and managing. That was scary, sure, but it was also liberating. But we shouldn't get carried away: given how highly distributed software systems are now part and parcel of a range of different industries, the issue of reliability and resilience isn't purely academic: in many instances, it's urgent and critical. That makes the work of managing and building software infrastructure an incredibly vital role. Back in 2015 I wrote that Docker had turned us all into SysAdmins, but on reflection it may be more accurate to say that we've now entered a world where cloud and the infrastructure-as-code revolution have turned everyone into a software developer.

Kubernetes is everywhere

Kubernetes is arguably the definitive technology of 2019. With the move to containers now fully mainstream, Kubernetes is integral to helping engineers deploy and manage containers at scale. The other important element of Kubernetes is that it all but kills off dreaded infrastructure lock-in. It gives you the freedom to build across different environments, and inside a more heterogeneous software infrastructure. From a tooling and skill set perspective that's a massive win. Although conversations about flexibility and agility have been ongoing in the tech industry for years, with Kubernetes we are finally getting to a place where that's a reality. This isn't to say it's all plain sailing - Kubernetes' complexity is a point of complaint for many, with many people suggesting that compared to, say, Docker, the developer experience leaves a lot to be desired. But insofar as DevOps and cloud-native have almost become the norm for many engineering teams, Kubernetes casts a huge shadow. Indeed, even if it's not the right option for you right now, it's hard to escape the fact that understanding it, and being open to using it in the future, is crucial. Find an extensive range of Kubernetes content in our new cloud bundles.

Serverless and NoOps

This year serverless has really come into its own. Although it was certainly gaining traction in 2018, the last 12 months have demonstrated its value as more and more teams have been opting to forgo servers completely. There have been a few arguments about whether serverless is going to kill off containers. It's not hard to see where this comes from, but in reality there's no chance of that happening. The way to think of serverless is as an additional option that can be used when speed and agility are particularly important. For large-scale application development and deployment, containers running on 'traditional' cloud servers will remain the dominant architectural approach. The companion trend to serverless is NoOps. Given the level of automation and abstraction that serverless can give you, the need to configure environments to ensure code runs properly all but disappears - code runs through 'functions' that get fired when needed. So, the thinking goes, the need for operations becomes very small indeed. But before anyone starts worrying about their jobs, the death of operations is greatly exaggerated. As noted above, serverless is just one option - it isn't redefining the architectural landscape. It might mean that the way we understand 'ops' evolves (just as 'dev' has), but it certainly won't kill it off. Discover and search serverless eBooks and videos on the Packt store.
Chaos engineering

In the introduction I mentioned that one of the strange quandaries of our contemporary distributed software world is that we've essentially made things more unreliable at a time when software systems are being used in ever more critical applications. From healthcare to self-driving cars, we're entering a world where unreliability is both more common and potentially more damaging. This is where chaos engineering comes in. Although it first appeared on the ThoughtWorks Radar back in November 2017 and hasn't yet moved out of its 'Trial' quadrant, in reality chaos engineering has been manifesting itself in a whole host of ways in 2019. Indeed, it's possible that the term itself is misleading. While it suggests a wholesale methodology, in truth the core principles behind it - essentially stress-testing your software in order to manage unpredictability and improve resilience - are being applied in different ways for both testing and security purposes. Tools like Gremlin have done a lot to promote chaos engineering and make it more accessible to organizations that maybe wouldn't see themselves as having the resources to perform cutting-edge approaches. It appears the groundwork has been done, which means it will be interesting to see how it evolves in 2020.

Observability: service meshes and tracing

One of the biggest challenges when dealing with complex software systems - and one of the reasons why they are necessarily unreliable - is that it can be difficult (sometimes impossible) to get an understanding of what's actually going on. This is why the debate around observability and monitoring has moved on. It's no longer enough to have a set of discrete logs and metrics. Chances are that they won't capture the subtleties of what's happening, or won't be able to provide you with the context that helps you actually understand where errors are coming from. What's more, a lack of observability and the wrong monitoring setup can cause all sorts of issues inside a team. At a time when the role of the on-call developer has never been more discussed and, indeed, important, ensuring there's a level of transparency is the only way to guarantee that all developers are able to support each other and solve problems as they emerge. From this perspective, then, observability has a cultural impact as much as a technical one. Learn distributed tracing with Yuri Shkuro from Uber's observability engineering team: find Mastering Distributed Tracing on the Packt store.

Not sure what to learn for 2020? Start exploring thousands of tech eBooks and videos on the Packt store.


Ansible role patterns and anti-patterns by Lee Garrett, its Debian maintainer

Vincy Davis
16 Dec 2019
6 min read
At DebConf held last year, Lee Garrett, a Debian maintainer for Ansible, talked about some of the best practices for the open-source configuration management tool. Ansible runs on Unix-like systems and can configure both Unix-like and Microsoft Windows machines. It uses a simple syntax written in YAML, a human-readable data serialization language, and uses SSH to connect to the node machines. Ansible is a helpful tool for taking a group of machines and describing their configuration and actions. It is used to implement software provisioning, application deployment, security, compliance, and orchestration solutions. Compared to other configuration management tools like Puppet, Chef, and SaltStack, Ansible is very easy to set up. Garrett says that due to its agentless nature, users can easily control any machine with an SSH daemon using Ansible. This means users can control any machine with Debian installed, and it also supports configuring many other things, such as networking equipment and Windows machines.

Interested in more of Ansible? Get an insightful understanding of the design and development of Ansible from our book 'Mastering Ansible' written by James Freeman and Jesse Keating. This book will help you grasp the true power of the Ansible automation engine by tackling complex, real-world actions with ease. The book also presents fully automated Ansible playbook executions with encrypted data.

What are Ansible role patterns?

Ansible uses a playbook as an entry point for provisioning and defines automation in the YAML format. A playbook requires a predefined pattern to organize it and also needs other files to facilitate the sharing and reusing of provisioning. This is where a 'role' comes into the picture. An Ansible role is an independent component that allows the reuse of common configuration steps. It contains a set of tasks that can be used to configure a host so that it serves a certain function, such as configuring a service. Roles are defined using YAML files with a predefined directory structure, containing directories such as defaults, vars, tasks, files, templates, meta, and handlers.

Some tips for creating good Ansible role patterns

An ideal role follows the 'roles/<role>/tasks/main.yml' layout, specifying the name of the role, its tasks, and main.yml. At the beginning of each role, users are advised to check for necessary conditions, for example with 'assert' tasks that verify whether the required variables are defined. Another prerequisite involves installing packages, for example with Yum (the default package manager on CentOS machines) or via a git checkout. Templating of files with abstraction is another important factor, where variables are defined and put into templates to create the actual config file. Garrett also points out that the template module has a validate parameter which helps the user check whether the config file has any syntax errors. A syntax error can fail the playbook even before deploying the config file. For example, he says, "use Apache with the right parameters to do a con check on the syntax of the file. So that way you never end up with a state where there's a broken configure something there." Garrett also recommends putting sensible defaults in the role's 'defaults/main.yml', which makes the defaults easy to override in specific cases. He further adds that a role should ideally run in check mode.
An Ansible playbook has a --check flag, which is basically a dry run of the complete playbook, and --diff will display file or file mode changes in the run. Further, he adds that a variable can be defined both in defaults and in the vars folder. However, the latter is hard to override and should be avoided, warns Garrett.

What are some typical anti-patterns in Ansible?

The shell and command modules are used in Ansible for executing commands on remote servers. Both modules take a command name followed by a list of arguments. The shell module is used when a command is to be executed on the remote servers through a particular shell. Garrett says that new Ansible users often end up using the shell or command module the way they would use a tool like wget. According to him, this practice is wrong, since Ansible currently ships thousands of modules, so there is a good chance that a module already exists for whatever you want to do. He also asserts that these two modules have several problems: the shell module gets interpreted by the actual shell, so special variables in the shell string can cause trouble, and if the playbook is running in check mode, the shell and command modules won't run at all. Another drawback of these modules is that they always report a 'changed' status whenever the command's exit value is zero, which means the user will probably have to capture the output and check whether there is any standard error present in it.

Next, Garrett explored some examples to show alternatives to the shell/command module, such as the 'slurp' module. The slurp module reads the whole file and returns it base64-encoded, and also enables access to the actual content from the registered result. The best thing about this module is that it never reports a change and works great in check mode. In another example, Garrett showed that when fetching a URL with the shell module, the file ends up getting downloaded every time the playbook runs, throwing an error each time. This can again be avoided by using the 'uri' module instead of the shell module. The uri module defines the URL of the file to be retrieved, helping the user write the task with clear parameters. At the end of the talk, Garrett also threw light on the problems with using the set_fact module and shared his templates. Watch the full video on YouTube.

You can also learn all about custom modules, plugins, and dynamic inventory sources in our book 'Mastering Ansible' written by James Freeman and Jesse Keating.

Read More

- Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
- Automating OpenStack Networking and Security with Ansible 2 [Tutorial]
- Why choose Ansible for your automation and configuration management needs?
- Ten tips to successfully migrate from on-premise to Microsoft Azure
- Why should you consider becoming 'AWS Developer Associate' certified?


Ten tips to successfully migrate from on-premise to Microsoft Azure 

Savia Lobo
13 Dec 2019
11 min read
The decision to start using Azure Cloud Services for your IT infrastructure seems simple. However, to succeed, a cloud migration requires hard work and good planning. At Microsoft Ignite 2018, Eric Berg, an Azure Lead Architect at COMPAREX and a Microsoft MVP for Azure and Cloud and Data Center Management, shared 'Ten tips for a successful migration from on-premises to Azure', based on his day-to-day learnings. Eric shares known issues, common pitfalls, and best practices to get started.

Further Reading

To gain a deep understanding of various Azure services related to infrastructure, applications, and environments, you can check out our book Microsoft Azure Administrator – Exam Guide AZ-103 by Sjoukje Zaal. This book is also an effective guide for acquiring the skills needed to pass the AZ-103 exam, with effective mock tests and solutions so that you can confidently crack this exam.

Tip #1: Have your Azure governance set

You need a basic plan for what you are going to do with Azure. Consider Azure governance as the basis for cloud adoption. Berg says, "if you don't have a plan for what you do with Azure, it will hurt you." Running something on Azure is good, but keeping it secure is the key thing. Governance rule sets help users audit and figure out whether everything is running as expected. One of the key parts of Azure governance is networking, so you should choose a networking concept that suits both the company and the business. Microsoft is moving really fast: in 2018, to connect the US and Europe you had to use a VPN, then came global VNet peering, and now we have Azure Virtual WAN. Such advancements allow a concept to keep growing and always use cutting-edge technologies, while adopting such a rule set enables customers to try a lot of things on their own.

Tip #2: Think about different requirements

From an IT perspective, every organization wants control, focus on its IT, and assurance that everything is compliant. Many organizations also want written policies in place. On the other hand, the human resources department wants to be totally agile and innovative, and wants to consume services and self-service without feeling the need to communicate with IT. "I've seen so many human resource departments doing their own contracts with external partners building some fancy new hiring platforms and IT didn't know anything about it," Berg points out. When it comes to the cloud, each and every member of the company should be aware and should be involved. It is not just an IT decision; it is a company-wide decision.

Tip #3: Assess your infrastructure

Berg says organizations should assess their environment. Migrating your servers to Azure exactly as they are is not the right thing to do, because in Azure the decision between 8 and 16 gigabytes of RAM is a decision between 100 and 200 percent of the cost. Hence, right-sizing based on a good assessment is extremely important, and this cannot be achieved by running a script once for 10 minutes to see what your VMs are doing. Instead, you should run an assessment for at least one month, or even three months, to capture peaks and quiet periods. With a good assessment you know what you really need and can migrate your systems better. Keep a check on your inventory and also on your contracts, to confirm that you are allowed to migrate your ERP or CRM system to Azure.
Some contracts state that deploying the solution outside the premises of the company requires an extra contract and extra cost, Berg warns. Migrating to Azure is technically easy but can be difficult from a contract perspective. Also, you should define your needs for migrating to a cloud platform: if you don't get value out of your migration, don't do it. Berg advises not to migrate to Azure just because everybody does, or because it's cool or fancy.

Tip #4: Do not rebuild your on-premises structures in the cloud

Cloud needs trust. Organizations often try to bring in the old on-premises structures, such as the external DMZ, the internal DMZ, and 15 security layers. Berg said they use Intune, a cloud-based service in the enterprise mobility management (EMM) space that helps enable your workforce to be productive while keeping your corporate data protected, along with Office 365 in the cloud. Intune doesn't stick to a DMZ, and even if you want to deploy your application or use the latest tech such as bots, cognitive services, and so on, it may not fit neatly into a structured network design in the cloud. On the other hand, there will be disconnected subscriptions, i.e. subscriptions with no connection to your on-premises network. This problem has to be dealt with at a security level. New services need new ways. If you are not agile, your IT won't be agile. If you need 16 days or six weeks to deploy a server and you want to stick to those rules and processes, then Azure won't be beneficial for you, as there will be no value in it for you.

Tip #5: Azure consumption is billed

If you spin up a VM that costs $25,000 a month, you have to pay for it. The M-series VMs have 128 cores and 4 terabytes of RAM and are simply amazing; deployed with Windows Server and SQL Server Enterprise, the cost goes up to $58,000 a month for just one VM. When you migrate to Azure and start integrating new things, you probably have to change your own business model. To implement tech such as facial recognition, you have to set up a cost management tool for usage tracking; there are many usage APIs and third-party tools available. Proper cost management in the Azure infrastructure helps to divide costs. If you put everything into one subscription and one resource group where everyone is the owner, things will still function, but you will not be able to figure out who is responsible for what. Instead, a good structure of subscriptions, good role-based access control, and a good tagging policy will help you figure out costs better.

Tip #6: Identity is the new perimeter

Azure AD is the center of everything. Accessing a traditional data center is not easy these days: it needs access to the premises, then to the data center, then a login to the on-premises infrastructure. If anyone has a user's login ID, they are inside the user's Azure AD, their VPN, and also their on-premises data center. Hence identity is a key part of security. "So, don't think about using MFA, use MFA. Don't think about using Privileged Identity Management, use it, because that's the only way to secure your infrastructure properly and get an insight into who is using what in my infrastructure and how it is going," Berg warns. In the modern workplace, one can work from anywhere, but one needs proper security levels in place: secure devices, secure identity, secure access via MFA, and so on. Stay cautious.
Tip #7: Include your users

Users are the most important part of any ecosystem, so when you migrate servers or the entire on-premises architecture, inform them. What if you have a CRM system fully in the cloud and there is no local cache on the system anymore? This won't fit the needs of your customers or internal customers, and this is why organizations should inform them of their plans. They should also ask users what they really need, which will, in turn, help the organizations. Berg illustrated this point with a project in Germany for a customer with a very complex product who wanted to decrease response times. The client needed up to two days to answer a customer's email, because the product is very complex, the documentation library is spread widely, and finding answers is hard. Their internal goal was to bring the response time down from two days to ten minutes. Berg said they considered using a bot, some cognitive services, Azure Search, and a plug-in in Outlook, so that when you get the mail you just search for your product and everything is figured out: the documentation, the fact sheets, and the standard email template for answering such a query. The proposed solution was good; both Berg and the IT team liked it. However, when the sales team was asked, they said such a solution would steal their jobs. The mistake here was that Sales was not included in the process of finding this solution. To rectify this, organizations should include all stakeholders. Focus on benefits and have some key users, because they will help you spread the word. In the above case, explain and evangelize to the sales teams: they are afraid because they don't know and don't understand what happens when a bot and some cognitive services figure out which document is right. This won't steal their job but will instead help them do it better, with improved efficiency. Train and educate so they are able to use it, check processes, and consider changes. Managed services can help you focus: backup, monitoring, and patching are things somebody can do for you. Organizations can then focus on what comes after the migration, such as integrating new services, improving right-sizing, optimizing cost, optimizing performance, and staying up to date with all the changes in Azure.

Tip #8: Consider transformation instead of migration

Consider a transformation instead of a migration. Build some logical blocks; don't move an ERP system without its database, or the other way around. Berg suggests that you:

- identify technical and licensing showstoppers
- define your infrastructure requirements
- check your compatibility to migrate
- update the helpdesk about SLAs
- ask whether Azure is really helping you (to cover your assets, and whether things are getting better or maybe worse)

Tip #9: Keep up to date

Continuous learning and continuous knowledge are key to growth. As Azure releases a lot of changes very often, users are notified of the latest updates via email or via Azure news. Organizations should review their architecture on a regular basis, Berg says, moving from VPN to global VNet peering to Virtual WAN so that you can change your infrastructure quite fast. Audit your governance not on a yearly basis but maybe monthly or quarterly. Consider changes fast; don't think about a change for two years, because by then it will no longer be interesting. If there's a new opportunity, grab it, use it, and three weeks later perhaps drop it. Avoid thinking about it for two months or more, or it will be too late.
Tip #10: Plan for the future

Do some end-to-end planning: think about the end-to-end solution, who's using it, what the back end for it is, and so on. Save money and forecast your costs. Keep an eye on resources that may spread because someone runs a script without knowing what they are doing. Simply migrating an IIS server with a static website to Azure is not a real cloud migration. Instead, customers should consider moving to a static storage website, a web app, and so on, rather than keeping everything in a Windows VM. Berg concludes by saying that an important step is to move beyond infrastructure alone. Everybody migrates infrastructure to Azure because that's easy; it's just migrating from one VM to another VM. Customers should not 'only' migrate. They should also start optimizing, move forward to platform services, be more agile, think about new ways, and, most importantly, get rid of all the old on-premises stuff. Berg adds, "In five years probably nobody will talk about infrastructure as a service anymore because everybody has migrated and optimized it already."

To stay more compliant with corporate standards and SLAs, learn how to configure Azure subscription policies with "Microsoft Azure Administrator – Exam Guide AZ-103" by Packt Publishing.

Read more

- 5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
- Azure Functions 3.0 released with support for .NET Core 3.1!
- Microsoft announces Azure Quantum, an open cloud ecosystem to learn and build scalable quantum solutions


Why should you consider becoming ‘AWS Developer Associate’ certified?

Savia Lobo
12 Dec 2019
5 min read
Organizations both large and small are looking to automate their day-to-day processes, and the best option they consider is moving to the cloud. However, they also fear certain challenges that can make cloud adoption difficult. The biggest challenge is the lack of resources or expertise to understand how different cloud services function or how they are built, and thus to leverage their advantages to the fullest. Many developers use cloud computing services, either through the companies they work with or simply by subscribing to them, without really knowing the intricacies; their knowledge of how the internal processes work remains limited. Certifications can, in fact, help you understand how the cloud functions and what goes on within these gigantic data holders. To start with, enroll yourself in a basic certification by any of the popular cloud service providers. Once you know the basics, you can go ahead and master the other certifications available, based on your job role or career aspirations.

Why choose an AWS certification

Amazon Web Services (AWS) is considered one of the top cloud service providers in the cloud computing market currently. According to Gartner's Magic Quadrant 2019, AWS continues to lead in public cloud adoption. AWS also offers eleven certifications that cover foundational and specialty cloud computing topics. If you are a developer or a professional who wants to pursue a career in cloud computing, you should consider taking the 'AWS Certified Developer - Associate' certification. Do you wish to learn from AWS subject-matter experts, explore real-world scenarios, and pass the AWS Certified Developer – Associate exam? We recommend you explore the book AWS Certified Developer - Associate Guide - Second Edition by Vipul Tankariya and Bhavin Parmar.

Many organizations use AWS services, and being certified can open various options for improved learning. Along with being popular among companies, AWS includes a host of cloud service options compared to other cloud service providers. While hands-on experience holds great value for developers, getting certified by one of the most popular cloud services will only bring greater advantages for their future. From web developers to database admins to IoT or AI developers, AWS includes certification options that delve into almost every aspect of technology. It is also constantly adding more offerings and innovating in a way that keeps one updated with cutting-edge technologies. Getting an AWS certification is a difficult task, but you do not have to quit your current job for it. Unlike other vendors, Amazon offers a realistic certification path that does not require highly specialized (and expensive) training to start. AWS certifications validate a candidate's familiarity and knowledge of best practices in cloud architecture, management, and security.

Prerequisites for this certification

The AWS Developer Associate certification will help you enhance your skills, impacting your career growth. However, one needs to keep certain prerequisites in mind. A developer should have:

- attended the AWS Essentials course or have equivalent experience
- knowledge of developing applications with API interfaces
- a basic understanding of relational and non-relational databases.
How the AWS Certified Developer - Associate level certification course helps a developer

AWS Certified Developer Associate certification training will give you hands-on exposure to core AWS services through guided lectures, videos, labs, and quizzes. You'll get trained in compute and storage fundamentals, and in architecture and security best practices that are relevant to the AWS Certified Developer exam. This associate-level course will help developers identify the appropriate AWS architecture and also learn to design, develop, and deploy optimum AWS cloud solutions. If one already has some existing knowledge of AWS, this course will help them identify and deploy secure procedures for optimal cloud deployment and maintenance. Developers will also learn to develop and maintain applications written for Amazon S3, DynamoDB, SQS, SNS, SWF, AWS Elastic Beanstalk, and AWS CloudFormation.

After achieving this certification, you will be an asset to any organization. You can help them leverage best practices around advanced cloud-based solutions and migrate existing workloads to the cloud. This indirectly means a rise in your annual income and also career growth. However, getting certified alone is not enough; other factors such as skills, experience, and geographic location are also important. This certification will help you become competent in using Amazon's cloud services. The course is part of the first tier (Associate level) of certifications that AWS offers. You could further improve your cloud computing skills by taking up certifications from the Professional tier and later from the Specialty tiers, whatever suits you best.

New AWS services and features are added every year. Certification alone is not enough; staying relevant is the key. To continually demonstrate expertise and knowledge of best practices for the most up-to-date AWS services, certification holders are required to re-certify every two years. You can either choose to take a professional-level exam for the same certification or pass the re-certification exam for your existing certification.

To further gain valuable insights on how to design, develop, and deploy cloud-based solutions using AWS, and also get familiar with Identity and Access Management (IAM) and Virtual Private Cloud (VPC), you can check out the book AWS Certified Developer - Associate Guide - Second Edition by Vipul Tankariya and Bhavin Parmar.

Read more

- How do AWS developers manage Web apps?
- Why AWS is the preferred cloud platform for developers working with big data
- How do you become a developer advocate?


AWS re:Invent 2019 Day 2 highlights: AWS Wavelength, Provisioned Concurrency for Lambda functions, and more!

Savia Lobo
04 Dec 2019
6 min read
Day 2 of the ongoing AWS re:Invent 2019 conference in Las Vegas included a lot of new announcements, such as AWS Wavelength, Provisioned Concurrency for Lambda functions, Amazon SageMaker Autopilot, and much more. The Day 1 highlights included a lot of exciting releases too, such as the preview of AWS' new quantum service, Braket, and Amazon SageMaker Operators for Kubernetes, among others.

Day Two announcements at AWS re:Invent 2019

AWS Wavelength to deliver ultra-low latency applications for 5G devices

With AWS Wavelength, developers can build applications that deliver single-digit millisecond latencies to mobile devices and end users. AWS developers can deploy their applications to Wavelength Zones, AWS infrastructure deployments that embed AWS compute and storage services within the telecommunications providers' datacenters at the edge of the 5G networks, and seamlessly access the breadth of AWS services in the region. This enables developers to deliver applications that require single-digit millisecond latencies, such as game and live video streaming, machine learning inference at the edge, and augmented and virtual reality (AR/VR). AWS Wavelength brings AWS services to the edge of the 5G network. This minimizes the latency to connect to an application from a mobile device. Application traffic can reach application servers running in Wavelength Zones without leaving the mobile provider's network. This reduces the extra network hops to the Internet that can result in latencies of more than 100 milliseconds, preventing customers from taking full advantage of the bandwidth and latency advancements of 5G. To know more about AWS Wavelength, read the official post.

Provisioned Concurrency for Lambda functions

To provide customers with improved control over their mission-critical app performance on serverless, AWS introduced Provisioned Concurrency, a Lambda feature that works with any trigger. For example, you can use it with WebSockets APIs, GraphQL resolvers, or IoT Rules. This feature gives you more control when building serverless applications that require low latency, such as web and mobile apps, games, or any service that is part of a complex transaction. It keeps functions initialized and hyper-ready to respond in double-digit milliseconds. This addition is helpful for implementing interactive services, such as web and mobile backends, latency-sensitive microservices, or synchronous APIs. On enabling Provisioned Concurrency for a function, the Lambda service will initialize the requested number of execution environments so they can be ready to respond to invocations. To know more about Provisioned Concurrency in detail, read the official document.

Amazon Managed Cassandra Service open preview launched

Amazon Managed Apache Cassandra Service (MCS) is a scalable, highly available, and managed Apache Cassandra-compatible database service. Since Amazon MCS is serverless, you pay only for the resources you use, and the service automatically scales tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. With Amazon MCS, it becomes easy to run Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today. Amazon MCS implements the Apache Cassandra version 3.11 CQL API, allowing you to use the code and drivers that you already have in your applications. Updating your application is as easy as changing the endpoint to the one in the Amazon MCS service table.
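Because Amazon MCS speaks the Cassandra 3.11 CQL API, existing driver code largely carries over. The snippet below is a minimal, hypothetical sketch using the open-source gocql driver; the endpoint, port, keyspace, table, and credentials are placeholders for illustration only (take the real contact point from the service table in the AWS console, and note that the service expects TLS connections).

    package main

    import (
        "fmt"
        "log"

        "github.com/gocql/gocql"
    )

    func main() {
        // Point the existing Cassandra driver at the managed endpoint instead of a
        // self-managed cluster. Endpoint, port, keyspace, and credentials below are
        // placeholders; copy the real values from the Amazon MCS console.
        cluster := gocql.NewCluster("cassandra.us-east-1.amazonaws.com")
        cluster.Port = 9142
        cluster.Keyspace = "bookstore"
        cluster.Consistency = gocql.LocalQuorum
        cluster.Authenticator = gocql.PasswordAuthenticator{
            Username: "service-user",
            Password: "service-password",
        }
        // Assumes the Amazon root CA certificate has been downloaded locally.
        cluster.SslOpts = &gocql.SslOptions{CaPath: "AmazonRootCA1.pem"}

        session, err := cluster.CreateSession()
        if err != nil {
            log.Fatalf("connecting to Amazon MCS: %v", err)
        }
        defer session.Close()

        // The query itself is unchanged Cassandra CQL.
        var title string
        if err := session.Query(`SELECT title FROM books WHERE isbn = ?`, "978-1-0").Scan(&title); err != nil {
            log.Fatalf("query failed: %v", err)
        }
        fmt.Println("found:", title)
    }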
To know more about Amazon MCS in detail, read the official AWS blog post.

Introducing Amazon SageMaker Autopilot to auto-create high-quality machine learning models with full control and visibility

The AWS team launched Amazon SageMaker Autopilot to automatically create classification and regression machine learning models with full control and visibility. SageMaker Autopilot first checks the dataset and then runs a number of candidates to figure out the optimal combination of data preprocessing steps, machine learning algorithms, and hyperparameters, all with a single API call or a few clicks in Amazon SageMaker Studio. It then uses this combination to train an inference pipeline, which can be easily deployed either on a real-time endpoint or for batch processing. All of this takes place on fully managed infrastructure. SageMaker Autopilot also generates Python code showing exactly how the data was preprocessed: not only can you understand what SageMaker Autopilot does, you can also reuse that code for further manual tuning if you're so inclined. SageMaker Autopilot supports:

- input data in tabular format, with automatic data cleaning and preprocessing
- automatic algorithm selection for linear regression, binary classification, and multi-class classification
- automatic hyperparameter optimization
- distributed training
- automatic instance and cluster size selection.

To know more about Amazon SageMaker Autopilot, read the official document.

Announcing ML-powered Amazon Kendra

Amazon Kendra is an ML-powered, highly accurate enterprise search service. It provides powerful natural language search capabilities to your websites and applications so that end users can easily find the information they need within the vast amount of content spread across the organization. Key benefits of Kendra include:

- Users can get immediate answers to questions asked in natural language. This eliminates sifting through long lists of links and hoping one has the information you need.
- Kendra lets you easily add content from file systems, SharePoint, intranet sites, file-sharing services, and more into a centralized location, so you can quickly search all of your information to find the best answer.
- The search results get better over time, as Kendra's machine learning algorithms learn which results users find most valuable.

To know more about Amazon Kendra in detail, read the official document.

Introducing the preview of Amazon CodeGuru

Amazon CodeGuru is a machine learning service for automated code reviews and application performance recommendations. It helps developers find the most expensive lines of code that affect application performance and cause difficulty while troubleshooting. CodeGuru is powered by machine learning, best practices, and hard-learned lessons across millions of code reviews and thousands of applications profiled on open source projects and internally at Amazon. It helps developers find and fix code issues such as resource leaks, potential concurrency race conditions, and wasted CPU cycles. To know more about Amazon CodeGuru in detail, read the official blog post.

A few other highlights of Day Two at AWS re:Invent 2019 include:

- General availability of Amazon EKS on AWS Fargate, AWS Fargate Spot, and ECS Cluster Auto Scaling.
- The Deep Graph Library, an open source library built for easy implementation of graph neural networks, is now available on Amazon SageMaker.
AWS re:Invent will continue throughout this week until the 6th of December; you can access the livestream. Keep checking this space for further updates and releases.

Read more

- Amazon re:Invent 2019 Day One: AWS launches Braket, its new quantum service and releases SageMaker Operators for Kubernetes
- Amazon's hardware event 2019 highlights: a high-end Echo Studio, the new Echo Show 8, and more
- 10 key announcements from Microsoft Ignite 2019 you should know about

KubeCon + CloudNativeCon North America 2019 Highlights: Helm 3.0 release, CodeReady Workspaces 2.0, and more!

Savia Lobo
26 Nov 2019
6 min read
Update: On November 26, the OpenFaaS community released a post including a few of its highlights at KubeCon, San Diego. The post also includes a few highlights from OpenFaaS Cloud, Flux from Weaveworks, Okteto, Dive from Buoyant, and k3s going GA.

KubeCon + CloudNativeCon 2019, held in San Diego, North America from 18-21 November, brought together over 12,000 attendees to discuss and advance containers, Kubernetes, and cloud-native computing. The conference was home to many major announcements, including the release of Helm 3.0, Red Hat's CodeReady Workspaces 2.0, the general availability of Managed Istio on IBM Cloud Kubernetes Service, and many more.

Major highlights at KubeCon + CloudNativeCon 2019

General availability of Managed Istio on IBM Cloud Kubernetes Service

IBM Cloud announced that Managed Istio on its Cloud Kubernetes Service is generally available. This service provides a seamless installation of Istio, automatic updates, lifecycle management of Istio control plane components, and integration with platform logging and monitoring tools. With Managed Istio, a user's service mesh is tuned for optimal performance in IBM Cloud Kubernetes Service. Istio is a service mesh that provides its features without developers having to make any modifications to their applications. The Istio installation is tuned to perform optimally on IBM Cloud Kubernetes Service and is pre-configured to work out of the box with IBM Log Analysis with LogDNA and IBM Cloud Monitoring with Sysdig.

Red Hat announces CodeReady Workspaces 2.0

CodeReady Workspaces 2.0 helps developers build applications and services in an environment similar to production, i.e. all apps run on Red Hat OpenShift. A few new services and tools in CodeReady Workspaces 2.0 include:

- Air-gapped installs: these enable CodeReady Workspaces to be downloaded, scanned, and moved into more secure environments when access to the public internet is limited or unavailable. It doesn't "call back" to public internet services.
- An updated user interface: this brings an improved desktop-like experience to developers.
- Support for VSCode extensions: this gives developers access to thousands of IDE extensions.
- Devfile: a sharable workspace configuration that specifies everything a developer needs to work, including repositories, runtimes, build tools, and IDE plugins, and is stored and versioned with the code in Git.
- Production-consistent containers for developers: this clones the sources where needed and adds development tools (such as debuggers, language servers, unit test tools, and build tools) as sidecar containers, so that the running application container mirrors production.

Brad Micklea, vice president of Developer Tools, Developer Programs, and Advocacy at Red Hat, said, "Red Hat is working to make developing in cloud native environments easier offering the features developers need without requiring deep container knowledge. Red Hat CodeReady Workspaces 2 is well-suited for security-sensitive environments and those organizations that work with consultants and offshore development teams." To know more about CodeReady Workspaces 2.0, read the press release on the Red Hat official blog.

Helm 3.0 released

Built upon the success of Helm 2, the internal implementation of Helm 3 has changed considerably. The most apparent change is the removal of Tiller. A rich set of new features has been added as a result of the community's input and requirements.
A few of these features include:

- an improved upgrade strategy: Helm 3 uses three-way strategic merge patches
- secrets as the default storage driver
- Go import path changes
- validating chart values with JSONSchema

Some features have been deprecated or refactored in ways that make them incompatible with Helm 2. Some new experimental features have also been introduced, including OCI support. The Helm Go SDK has also been refactored for general use, with the goal of sharing and reusing code open sourced with the broader Go community. To know more about Helm 3.0 in detail, read the official blog post.

AWS, Intuit, and Weaveworks collaborate on Argo Flux

Recently, Weaveworks announced a partnership with Intuit to create Argo Flux, a major open-source project to drive GitOps application delivery for Kubernetes via an industry-wide community. Argo Flux combines the Argo CD project led by Intuit with the Flux CD project driven by Weaveworks, two well-known open source tools with strong community support. At KubeCon, AWS announced that it is integrating the GitOps tooling based on Argo Flux in Elastic Kubernetes Service and Flagger for AWS App Mesh. The collaboration resulted in a new project called GitOps Engine to simplify application deployment in Kubernetes. The GitOps Engine will be responsible for the following functionality:

- access to Git repositories
- Kubernetes resource cache
- manifest generation
- resource reconciliation
- sync planning

To know more about this collaboration in detail, read the GitOps Engine page on GitHub.

Grafana Labs announces general availability of Loki 1.0

Grafana Labs, an open source analytics and monitoring solution provider, announced that Loki version 1.0 is generally available for production use. Loki is an open source logging platform that provides developers with an easy-to-use, highly efficient, and cost-effective approach to log aggregation. With Loki 1.0, users can instantaneously switch between metrics and logs, preserving context and reducing MTTR. By storing compressed, unstructured logs and only indexing metadata, Loki is cost-effective and simple to operate by design. It includes a set of components that can be composed into a fully featured logging stack. Grafana Cloud offers a high-performance, hosted Loki service that allows users to store all logs together in a single place with usage-based pricing. Read about Loki 1.0 on GitHub to know more in detail.

Rancher extends Kubernetes to the edge with the general availability of k3s

Rancher, creator of the vendor-agnostic and cloud-agnostic Kubernetes management platform, announced the general availability of k3s, a lightweight, certified Kubernetes distribution purpose-built for small-footprint workloads. Rancher partnered with ARM to build a highly optimized version of Kubernetes for the edge. It is packaged as a single binary under 40 MB with a small footprint, which reduces the dependencies and steps needed to install and run Kubernetes in resource-constrained environments such as IoT and edge devices. To know more about this announcement in detail, read the official press release.

There were many additional announcements: Portworx launched PX-Autopilot, Huawei presented its latest advances on KubeEdge, Diamanti announced its Spektra hybrid cloud solution, and many more. To know more about all the keynotes and tutorials at KubeCon North America 2019, visit its GitHub page.
Read more

- Chaos engineering comes to Kubernetes thanks to Gremlin
- "Don't break your users and create a community culture", says Linus Torvalds, creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
- KubeCon + CloudNativeCon EU 2019 highlights: Microsoft's Service Mesh Interface, enhancements to GKE, Virtual Kubelet 1.0, and much more!

Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices

Vincy Davis
17 Oct 2019
5 min read
Yesterday, Microsoft announced the launch of two new open-source projects: the Open Application Model (OAM) and Dapr. OAM, developed by Microsoft and Alibaba Cloud under the Open Web Foundation, is a specification that enables developers to define a coherent model to represent an application. The Dapr project, on the other hand, allows developers to build portable microservice applications using any language and framework, for new or existing code.

Open Application Model (OAM)

In OAM, an application is made up of many components, such as a MySQL database or a replicated PHP server with a corresponding load balancer. These components are then used to build an application, enabling platform architects to utilize reusable components for the easy building of reliable applications. OAM also empowers application developers to separate the application description from the application deployment details, allowing them to focus on the key elements of their application instead of its operational details. Microsoft also asserted that OAM has unique characteristics, such as being platform agnostic. The official blog states, "While our initial open implementation of OAM, named Rudr, is built on top of Kubernetes, the Open Application Model itself is not tightly bound to Kubernetes. It is possible to develop implementations for numerous other environments including small-device form factors, like edge deployments and elsewhere, where Kubernetes may not be the right choice. Or serverless environments where users don't want or need the complexity of Kubernetes." Another important feature of OAM is its design extensibility. OAM also enables platform providers to expose the unique characteristics of their platform through the trait system, which will help them build cross-platform apps wherever the necessary traits are supported.

In an interview with TechCrunch, Microsoft Azure CTO Mark Russinovich said that currently Kubernetes is "infrastructure-focused" and does not provide any resource to build a relationship between the objects of an application. Russinovich believes that OAM will solve the problem that many developers and ops teams are facing today. Commenting on the cooperation with Alibaba Cloud on this specification, Russinovich observed that both companies encountered the same problems when they talked to their customers and internal teams. He further said that over time Alibaba Cloud will launch a managed service based on OAM, and chances are that Microsoft will do the same.

The Dapr project for building microservice applications

This is an alpha release of Dapr, with an event-driven runtime to help developers build resilient, stateless and stateful microservice applications for the cloud and edge. It also allows the application to be built using any programming language and developer framework. "In addition, through the open source project, we welcome the community to add new building blocks and contribute new components into existing ones. Dapr is completely platform agnostic, meaning you can run your applications locally, on any Kubernetes cluster, and other hosting environments that Dapr integrates with. This enables developers to build microservice applications that can run on both the cloud and edge with no code changes," stated the official blog.

APIs in Dapr are exposed through a sidecar architecture (either as a container or as a process) and do not require the application code to include any Dapr runtime code. This simplifies Dapr integration from other runtimes and keeps the application logic separate for improved supportability.
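Because the sidecar exposes plain HTTP (and gRPC) endpoints, calling into Dapr from Go needs nothing beyond the standard library. The sketch below is illustrative only: it assumes the sidecar's default HTTP port of 3500 and a hypothetical target app-id of "orders" with a "neworder" method; check the Dapr documentation for the exact endpoints in your version.

    package main

    import (
        "bytes"
        "fmt"
        "io/ioutil"
        "log"
        "net/http"
    )

    func main() {
        // Service invocation goes through the local Dapr sidecar, which handles
        // discovery, retries, and tracing on the caller's behalf.
        // URL shape: http://localhost:<dapr-http-port>/v1.0/invoke/<app-id>/method/<method>
        url := "http://localhost:3500/v1.0/invoke/orders/method/neworder" // assumed app-id and method

        order := []byte(`{"orderId": "42", "item": "book"}`)
        resp, err := http.Post(url, "application/json", bytes.NewBuffer(order))
        if err != nil {
            log.Fatalf("calling Dapr sidecar: %v", err)
        }
        defer resp.Body.Close()

        body, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            log.Fatalf("reading response: %v", err)
        }
        fmt.Printf("status %d: %s\n", resp.StatusCode, body)
    }

The other building blocks listed below follow the same pattern: they are exposed as versioned HTTP routes on the sidecar, so the application itself stays free of Dapr-specific libraries.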
This sidecar model simplifies Dapr integration from other runtimes and keeps application logic separate, which improves supportability.

[Image source: Microsoft]

Building blocks of Dapr

• Resilient service-to-service invocation: enables method calls, including retries, on remote services wherever they are running in the supported hosting environment.
• State management for key/value pairs: allows long-running, highly available, stateful services to be written easily, alongside stateless services in the same application.
• Publish and subscribe messaging between services: enables event-driven architectures that simplify horizontal scalability and make them resilient to failure.
• Event-driven resource bindings: helps build event-driven architectures for scale and resiliency by receiving and sending events to and from any external resource, such as databases, queues, file systems, blob stores, and webhooks.
• Virtual actors: a pattern for stateless and stateful objects that makes concurrency simple through method and state encapsulation. Dapr also provides state and life-cycle management for actor activation/deactivation, plus timers and reminders to wake up actors.
• Distributed tracing between services: makes it easier to diagnose inter-service calls in production using the W3C Trace Context standard, and can push events to tracing and monitoring systems.

Users have liked both open-source projects, especially Dapr. A user on Hacker News comments, “I'm excited by Dapr! If I understand it correctly, it will make it easier for me to build applications by separating the "plumbing" (stateful & handled by Dapr) from my business logic (stateless, speaks to Dapr over gRPC). If I build using event-driven patterns, my business logic can be called in response to state changes in the system as a whole. I think an example of stateful "plumbing" is a non-functional concern such as retrying a service call or a write to a queue if the initial attempt fails. Since Dapr runs next to my application as a sidecar, it's unlikely that communication failures will occur within the local node.”

https://twitter.com/stroker/status/1184810311263629315
https://twitter.com/ThorstenHans/status/1184513427265523712

article-image-what-to-expect-at-cloud-data-summit-2019-a-summit-hosted-in-the-cloud
Sugandha Lahoti
09 Oct 2019
2 min read

What to expect at Cloud Data Summit 2019 - a summit hosted in the cloud

2019’s Cloud Data Summit is quickly approaching, scheduled to take place on October 16th-17th. This year will be different: the event will be hosted online as a 100% virtual summit. It will feature industry-leading speakers and thought leaders talking about the hype around AI, big data, machine learning, PaaS, and IaaS technologies.

Although it is a 100% virtual summit, the conference will have all the features of a standard one: main stage discussions, speaker panels, peer networking sessions, roundtables, breakout sessions, and group lunches. All topics will be presented in a way that is comfortable for both technically and non-technically inclined attendees, and will cover real-world implementations, how-to’s, best practices, potential pitfalls, and how to leverage the full potential of the cloud’s data and processing power. Attendees of the Cloud Data Summit include Google, Spotify, IBM, SAP, Microsoft, Apple, and more.

Here’s a list of featured speakers:

• Jay Natarajan, US AI Lead and Lead Architect, Microsoft
• Dan Linstedt, Inventor of the Data Vault Methodology, CEO of LearnDataVault.com, CTO and Co-Founder of Scalefree
• T. Scott Clendaniel, Co-founder & Consultant, Cottrell Consulting
• Barr Moses, Co-founder & CEO, Monte Carlo Data
• Dr. Joe Perez, Sr. Systems Analyst and Team Lead, NC Department of Health and Human Services
• Kurt Cagle, CEO of Semantical LLC, Contributor to Forbes and Managing Editor of Cognitive World
• Joshua Cottrell, Co-founder & Consultant, Cottrell Consulting
• Jawad Sartaj, Chief Analytics Officer, Somos Community Care
• Daniel O’Connor, Head of Product Data Practice, Aware Web Solutions Inc.
• Eric Axelrod, Founder of Cloud Data Summit, President & Chief Architect, DIGR, and Executive Advisor

Individuals or organizations interested in learning more about the Cloud Data Summit, or in registering to attend, can visit its official website. If you fall into any of these categories - Business Executives, Data and IT Executives, Data Managers, Data Scientists, Data Engineers, Data Warehouse Architects (so, anyone interested in learning about cloud migration and its consequences) - Cloud Data Summit is not to be missed. For current students and new graduates, tickets are up to 80% off via the special student registration form.

article-image-cloudflare-terminates-services-to-8chan-following-yet-another-set-of-mass-shootings-in-the-us-tech-awakening-or-liability-avoidance
Sugandha Lahoti
06 Aug 2019
9 min read

Cloudflare terminates services to 8chan following yet another set of mass shootings in the US. Tech awakening or liability avoidance?

Update: Jim Watkins, the owner of 8chan, has spoken out against the ongoing backlash in a defensive video statement uploaded to YouTube on 6th August. "My company takes a firm stand in helping law enforcement, and within minutes of these two tragedies, we were working with FBI agents to find out what information we could to help in their investigations. There are about 1 million users of 8chan. 8chan is an empty piece of paper for writing on. It is disturbing to me that it can be so easily shut down. Over the weekend the domain name service for 8chan was abruptly terminated by the provider Cloudflare," he states in the video.

He adds, "First of all the El Paso shooter posted on Instagram, not 8chan. Later someone uploaded a manifesto; however, that manifesto was not uploaded by the Walmart shooter. It is unfortunate that this place of free speech has temporarily been removed. We are working to restore service. It is clearly a political move to remove 8chan from CloudFlare; it has dispersed a peacefully assembled group of people."

Watkins went on to call Cloudflare's decision "cowardly". He said, "Contrary to the unfounded claim by Mr. Prince of CloudFlare, 8-chan is a lawful community abiding by the laws of the United States and enforced in the Ninth Circuit Court. His accusation has caused me tremendous damage. In the meantime, I wish his company the best and hold no animosity towards him or his cowardly and not thought-out actions against 8-chan."

Saturday witnessed two horrific mass shooting tragedies: one in which a gunman shot at least 20 people at a sprawling Walmart shopping complex in El Paso, Texas, and another in Dayton, Ohio, at the entrance of Ned Peppers Bar, where ten people were killed, including the perpetrator, and at least 27 others were injured.

The gunman in the El Paso shooting has been identified as Patrick Crusius, according to CNN sources. He appears to have been inspired by the online forum known as 8chan, an online message board that is home to online extremists who share racist and anti-Semitic conspiracy theories. According to police officials, a four-page document that they believe was written by Crusius was posted to 8chan 20 minutes before the shootings. The post said, "I'm probably going to die today." It blamed white nationalists and immigrants for taking away jobs and spewed racist hatred towards immigrants and Hispanics.

The El Paso post is not an isolated incident; 8chan has filled with unmoderated violent and extremist content over time. Nearly the same thing happened on 8chan before the terror attack in Christchurch, New Zealand, and in his post the El Paso shooter referenced the Christchurch incident, saying he was inspired by the Christchurch content on 8chan which glorified the previous massacre. The suspected killer in the synagogue shooting in Poway, California, also posted a hate-filled "open letter" on 8chan. In March this year, Australian telecom company Telstra denied millions of Australians access to the websites 4chan, 8chan, Zero Hedge, and LiveLeak in reaction to the Christchurch mosque shootings.

Cloudflare first defends 8chan citing "moral obligations" but later cuts all ties

Following this disclosure, Cloudflare, which provides internet infrastructure services to 8chan, initially continued to defend hosting the site, calling it its "moral obligation" to provide 8chan its services.
Keeping 8chan within its network was a "moral obligation", said Cloudflare, adding: "We, as well as all tech companies, have an obligation to think about how we solve real problems of real human suffering and death. What happened in El Paso today is abhorrent in every possible way, and it's ugly, and I hate that there's any association between us and that … For us, the question is which is the worse evil? Is the worse evil that we kick the can down the road and don't take responsibility? Or do we get on the phone with people like you and say we need to own up to the fact that the internet is home to many amazing things and many terrible things and we have an absolute moral obligation to deal with that."

https://twitter.com/slpng_giants/status/1158214314198745088
https://twitter.com/iocat/status/1158218861658791937

Cloudflare has been under the spotlight over the past few years for continuing to work with websites that foster hate. Prior to 8chan, in 2017, Cloudflare discontinued services to the neo-Nazi blog The Daily Stormer after the terror attack in Charlottesville. However, the Daily Stormer continues to run today, having moved to a different infrastructure service, with allegedly more readers than ever.

After an intense public and media backlash over the weekend, Cloudflare announced that it would completely stop providing support for 8chan. Cloudflare is also readying for an initial public offering in September, which may have been a factor in the decision to cut ties with 8chan. In a blog post today, the company explained the decision: "We just sent notice that we are terminating 8chan as a customer effective at midnight tonight Pacific Time. The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths."

Cloudflare has also cut off 8chan's access to its DDoS protection service, although this will only have a short-term impact; 8chan can always find another cloud partner and resume operations. Cloudflare acknowledges as much: "While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online. It does nothing to address why mass shootings occur. It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we've solved our own problem, but we haven't solved the Internet's."

The company added, "We feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often," adding that this is not "due to some conception of the United States' First Amendment," since Cloudflare is a private company (and most of its customers, and more than half of its revenue, are outside the United States). Instead, Cloudflare "will continue to engage with lawmakers around the world as they set the boundaries of what is acceptable in those countries through due process of law. And we will comply with those boundaries when and where they are set."

Founder of 8chan wants the site to be shut off

8chan founder Fredrick Brennan also appreciated Cloudflare's decision to block the site. After the gruesome El Paso shootings, he told the Washington Post that the site's owners should "do the world a favor and shut it off." However, he told BuzzFeed News that shutting down 8chan wouldn't entirely stop the extremism we're now seeing, but it would make it harder for its users to organize.
https://twitter.com/HW_BEAT_THAT/status/1158194175755485191

In a March interview with The Wall Street Journal, Brennan expressed his regrets over his role in the site's creation and warned that the violent culture that had taken root on 8chan's boards could lead to more mass shootings. Brennan founded the site in 2011 and announced his departure from the company in July 2016. 8chan is owned by Jim Watkins and run by his son, Ron, who posted on Twitter that 8chan will be moving to another service as soon as possible and who has resisted calls to moderate or shut down the site. On Sunday, a banner at the top of 8chan's home page read, "Welcome to 8chan, the Darkest Reaches of the Internet."

https://twitter.com/CodeMonkeyZ/status/1158202303096094720

Cloudflare acted too late, too little

Cloudflare's decision to simply block 8chan was not seen as an adequate response by some, who say Cloudflare should have acted earlier. 8chan was known to have enabled child pornography in 2015 and, as a result, was removed from Google Search. Coupled with the Christchurch mosque and Poway synagogue shootings earlier in the year, this put increasing pressure on those providing 8chan's internet and financial service infrastructure to terminate their support.

https://twitter.com/BinaryVixen899/status/1158216197705359360

Laurie Voss, the cofounder of npmjs, called out Cloudflare, and subsequently other content sites such as Facebook and Twitter, for shirking responsibility under the guise of being infrastructure companies that therefore cannot enforce content standards.

https://twitter.com/seldo/status/1158204950595420160
https://twitter.com/seldo/status/1158206331662323712

"Facebook, Twitter, Cloudflare, and others pretend that they can't. They can. They just don't want to."

https://twitter.com/seldo/status/1158206867438522374

"I am super, super tired of companies whose profits rely on providing maximum communication with minimum moderation pretending this is some immutable law and not just the business model they picked," he tweeted. Others also agreed that Cloudflare's statement eschews responsibility.

https://twitter.com/beccalew/status/1158196518983045121
https://twitter.com/slpng_giants/status/1158214314198745088

Voxility, 8chan's hardware provider, also bans the site

Web services company Voxility has also banned 8chan and its new host Epik, which had been leasing web space from it. Epik's website remains accessible, but 8chan now returns an error message. "As soon as we were notified of the content that Epik was hosting, we made the decision to totally ban them," Voxility business development VP Maria Sirbu told The Verge. Sirbu said it was unlikely that Voxility would work with Epik again. "This is the second situation we've had with the reseller and this is not tolerable," she said.

https://twitter.com/alexstamos/status/1158392795687575554

Does de-platforming even work?

De-platforming - banning the people who spread extremist content - is not a complete solution, since they eventually migrate to other platforms and are still able to circulate their ideology. Closing 8chan is not the solution to the bigger problem of controlling racism and extremism; closing one 8chan will sprout another 20chan.

"8chan is no longer a refuge for extremist hate — it is a window opening onto a much broader landscape of racism, radicalization, and terrorism. Shutting down the site is unlikely to eradicate this new extremist culture because 8chan is anywhere. Pull the plug, it will appear somewhere else, in whatever locale will host it.
Because there's nothing particularly special about 8chan, there are no content algorithms, hosting technology immaterial. The only thing radicalizing 8chan users are other 8chan users," Ryan Broderick from BuzzFeed wrote. A group of users told BuzzFeed that it's now common for large 4chan threads to migrate over into Discord servers before the thread 404s.

After Cloudflare, Amazon is beginning to face public scrutiny, as 8chan's operator Jim Watkins sells audiobooks on Amazon.com and Audible.

https://twitter.com/slpng_giants/status/1158213239697747968
article-image-businesses-need-to-learn-how-to-manage-cloud-costs-to-get-real-value-from-serverless-and-machine-learning-as-a-service
Richard Gall
10 Jun 2019
7 min read

Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service

This year’s Skill Up survey threw a spotlight on the challenges developers and engineering teams face when it comes to cloud. Indeed, it even highlighted the extent to which cloud is still a nascent trend for many developers, even though it feels so mainstream within the industry: almost half of respondents aren’t using cloud at all.

For those that do use cloud, the survey results also illustrated some of the specific ways that people are using or plan to use cloud platforms, as well as highlighting the biggest challenges and mistakes organisations are making when it comes to cloud. What came out as particularly important is that the limitations and the opportunities of cloud must be thought of together. With our research finding that cost only becomes important once a cloud platform is being used, it’s clear that if we’re to use cloud platforms successfully - and cost effectively - we need to understand the relationship between cost and opportunity over a sustained period of time (rather than, say, a month). As one of our respondents told us, “businesses are still figuring out how to leverage cloud computing for their business needs and haven't quite got the cost model figured out.”

Why does cost pose such a problem when it comes to cloud computing?

In this year’s survey, we asked people what their primary motivations for using cloud are. The key motivators were use case and employment (i.e. the decision was out of the respondent’s hands), but it was striking to see cost as only a minor consideration. Placed in the broader context of discussions around efficiency and a tightening global market, this seemed remarkable. It appears that people aren’t entering the cloud marketplace with cost as a top consideration.

In contrast, however, this picture changes when we asked respondents about the biggest limiting factors of their chosen cloud platforms. At this point, cost becomes a much more important factor. This highlights that the reality of cloud costs only becomes apparent - or rather, becomes more apparent - once a cloud platform is implemented and being used. From this we can infer that there is a lack of strategic planning in cloud purchasing. It’s almost as if technology leaders are falling into certain cloud platforms based on commonplace assumptions about what’s right. This then has consequences further down the line.

We need to think about cloud cost and functionality together

The fact that functionality is also a key limitation is important to note here; in fact, it is closely tied up with cost, insofar as the functionality of each cloud platform is very neatly defined by its pricing structure. Take serverless, for example: although it’s typically regarded as something that can be cost-effective for organizations, it can prove costly when you start to scale workloads. You might save more money simply by optimizing your infrastructure. What this means in practice is that the features you want to exploit within your cloud platform should be approached with a clear sense of how they’re going to be used and how they fit into the evolution of your business and technology over the medium and long term.

Getting the most from leading cloud trends

There were two distinct trends that developers identified as the most exciting: machine learning and serverless. Although the two are very different, they both hold a promise of efficiency.
Whether that’s the efficiency of moving away from traditional hosting to cloud-based functions, or of powerful data processing and machine-led decision making at scale, the fundamentals of both trends are about managing economies of scale in ways that would have been impossible half a decade ago. This plays into some of the issues around cost. If serverless and machine learning both appear to offer ways of saving on spending or radically driving growth, then when things don’t quite turn out the way technology purchasers expected, the relationship between cost and features can become strained.

Serverless

The idea that serverless will save you money is popular, and in general it is inexpensive. The pricing structures of both AWS and Azure make Functions as a Service (FaaS) particularly attractive: you’ll no longer be spending money on provisioning compute resources you don’t actually need, with your provider managing the necessary elasticity.

Read next: The Future of Cloud lies in revisiting the designs and limitations of today’s notion of ‘serverless computing’, say UC Berkeley researchers

However, as we've already seen, serverless doesn't guarantee cost efficiency. You need to properly understand how you're going to use serverless to ensure that it's not costing you big money without you realising it (a rough cost sketch at the end of this piece shows how quickly per-invocation pricing adds up). One way of using it might be to employ it for very specific workloads, allowing you to experiment in a relatively risk-free manner before employing it elsewhere. Whatever you decide, you must ensure that the scope and purpose of the project is clear.

Machine learning as a service

Machine learning - or deep learning in particular - is very expensive to do. This is one of the reasons that machine learning in the cloud - machine learning as a service - is one of the most attractive features of many cloud platforms. But it’s not just about cost. Using cloud-based machine learning tools also removes some of the barriers to entry, making it easier for engineers who don’t necessarily have extensive training in the field to actually start using machine learning models in various ways.

However, this does come with some limitations, and just as with serverless, you really do need to understand and even visualize how you’re going to use machine learning to ensure that you’re not just wasting time and energy with machine learning cloud features. You need to be clear about exactly how you’re going to use machine learning, what data you’re going to use, where it’s going to be stored, and what the end result should look like. Perhaps you want to embed machine learning capabilities inside an app? Or perhaps you want to run algorithms on existing data to inform internal decisions? Whatever it is, all these questions are important.

These questions will also impact the type of platform you select. Google Cloud Platform is far and away the go-to platform for machine learning (this is one of the reasons why so many respondents said their motivation for using it was use case), but bear in mind that this could lead to some issues if the bulk of your data is typically stored on, say, AWS - you’ll need to build some kind of integration, or move your data to GCP (which is always going to be a headache).

The hidden costs of innovation

These types of extras are really important to consider when it comes to leveraging exciting cloud features.
Yes, you need to use a pricing calculator and spend time comparing platforms, but the additional development time needed to build integrations or move data is something a calculator clearly can’t account for. Indeed, this is true in the context of both machine learning and serverless. The organizational implications of your purchases are perhaps the most important consideration, and one that’s often the easiest to miss.

Control the scope and empower your team

The organizational implications aren’t necessarily problems to be resolved - they could well be opportunities that you need to embrace. But you do need to prepare and be ready for those changes. Ultimately, preparation is key when it comes to leveraging the benefits of cloud. Defining the scope is critical, and to do that you need to understand what your needs are and where you want to get to. That sounds obvious, but it’s all too easy to fall into the trap of focusing on the possibilities and opportunities of cloud without paying careful consideration to how to ensure it works for you.

Read the results of Skill Up 2019. Download the report here.
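As a companion to the serverless pricing discussion above, here is a rough back-of-envelope sketch in Go of how per-invocation billing adds up. The rates are illustrative placeholders rather than any provider's actual prices, and the 100 ms billing increment is an assumption; check your provider's current price list before relying on numbers like these.

    package main

    import (
        "fmt"
        "math"
    )

    // Illustrative placeholder rates - not any provider's actual pricing.
    const (
        pricePerMillionRequests = 0.20      // USD per million invocations (assumed)
        pricePerGBSecond        = 0.0000167 // USD per GB-second of compute (assumed)
    )

    // monthlyCost estimates one month of FaaS spend, assuming duration is
    // rounded up to 100 ms billing increments.
    func monthlyCost(invocations, durationMs, memoryGB float64) float64 {
        billedSeconds := math.Ceil(durationMs/100) * 0.1
        requestCost := invocations / 1e6 * pricePerMillionRequests
        computeCost := invocations * billedSeconds * memoryGB * pricePerGBSecond
        return requestCost + computeCost
    }

    func main() {
        // Example workload: 50 million invocations, 120 ms average duration, 512 MB memory.
        fmt.Printf("estimated monthly cost: $%.2f\n", monthlyCost(50e6, 120, 0.5))
    }

Fed with your own traffic profile, a sketch like this makes it easier to spot the point at which always-on instances or optimized infrastructure become cheaper than paying per invocation.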

article-image-microsoft-build-2019-microsoft-showcases-new-updates-to-ms-365-platfrom-with-focus-on-ai-and-developer-productivity
Sugandha Lahoti
07 May 2019
10 min read

Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge.

In his opening keynote, Microsoft CEO Satya Nadella outlined the company’s vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming. “As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in,” said Satya Nadella, CEO, Microsoft. “Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone.”

https://youtu.be/rIJRFHDr1QE

Increasing developer productivity in the Microsoft 365 platform

Microsoft Graph data connect

Microsoft Graph is now powered with data connectivity: Microsoft Graph data connect is a service that combines analytics data from the Microsoft Graph with customers’ business data. It will provide Office 365 data and Microsoft Azure resources to users via a toolset, with migration pipelines deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs. More information here.

Microsoft Search

Microsoft Search provides a unified search experience across Microsoft apps - Office, Outlook, SharePoint, OneDrive, Bing and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalize searches. Other features included in Microsoft Search are:

• Search box displacement
• Zero query typing and key-phrase suggestions
• Query history, including personal search query history
• Administrator access to the history of popular searches for their organizations, but not to search history for individual users
• File, people, site, and bookmark suggestions

Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on Microsoft Search here.

Fluid Framework

As the name suggests, Microsoft's newly launched Fluid Framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software development kit and the first experiences powered by the Fluid Framework later this year in Microsoft Word, Teams, and Outlook. Read more about the Fluid Framework here.

Microsoft Edge new features

Microsoft Build 2019 paved the way for a bundle of new features for Microsoft’s flagship web browser, Microsoft Edge. New features include:

• Internet Explorer mode: integrates Internet Explorer directly into the new Microsoft Edge via a new tab, allowing businesses to run legacy Internet Explorer-based apps in a modern browser.
• Privacy Tools: additional privacy controls that let customers choose from three levels of privacy in Microsoft Edge - Unrestricted, Balanced, and Strict. These options limit how third parties can track users across the web. “Unrestricted” allows all third-party trackers to work in the browser, “Balanced” blocks third-party trackers from sites the user has not visited before, and “Strict” blocks all third-party trackers.
• Collections: allows users to collect, organize, share and export content more efficiently, with Office integration.

Microsoft is also migrating Edge as a whole over to Chromium, which will make Edge easier for third parties to develop for. For more details, visit Microsoft’s developer blog.

New toolkit enhancements in the Microsoft 365 platform

Windows Terminal

Windows Terminal is Microsoft’s new application for Windows command-line users. Top features include:

• A user interface with emoji-rich fonts and GPU-accelerated text rendering
• Multiple tab support along with theming and customization features
• A powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL) and all forms of command-line application

Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here.

React Native for Windows

Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using the “React Native for Windows” implementation. React Native for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices and more. The project is being developed on GitHub and is available for developers to test, with more mature releases to follow soon.

Windows Subsystem for Linux 2

Microsoft rolled out a new architecture for the Windows Subsystem for Linux, WSL 2, at Build 2019. Microsoft will also be shipping a fully open-source Linux kernel with Windows, specially tuned for WSL 2. New features include massive file system performance increases (twice as much speed for file-system-heavy operations, such as a Node Package Manager install). WSL 2 also supports running Linux Docker containers. The next generation of WSL arrives for Insiders in mid-June. More information here.

New releases in multiple developer tools

.NET 5 arrives in 2020

.NET 5 is the next major version of the .NET platform, which will be available in 2020. .NET 5 will have all .NET Core features as well as more additions:

• One base class library containing APIs for building any type of application
• More choice of runtime experiences
• Java interoperability on all platforms
• Objective-C and Swift interoperability on multiple operating systems
• Both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios
• One unified toolchain supported by new SDK project types, as well as a flexible deployment model (side-by-side and self-contained EXEs)

Detailed information here.

ML.NET 1.0

ML.NET is Microsoft’s open-source and cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible to .NET developers. Its new version, ML.NET 1.0, was released at Microsoft Build 2019 yesterday.
Some new features in this release are:

• Automated Machine Learning preview: transforms input data and selects the best performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports regression and classification ML tasks.
• ML.NET Model Builder preview: Model Builder is a simple UI tool for developers that uses AutoML to build ML models. It also generates model training and model consumption code for the best performing model.
• ML.NET CLI preview: the ML.NET CLI is a dotnet tool that generates ML.NET models using AutoML and ML.NET. It quickly iterates through a dataset for a specific ML task and produces the best model.

Visual Studio IntelliCode, Microsoft’s tool for AI-assisted coding

Visual Studio IntelliCode, Microsoft’s AI-assisted coding tool, is now generally available. It is essentially an enhanced IntelliSense, Microsoft’s extremely popular code completion tool. IntelliCode is trained on the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML in Visual Studio, and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. IntelliCode is also included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview.

Visual Studio 2019 version 16.1 Preview 2

The Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings the Time Travel Debugging feature introduced with version 16.0 out of preview, and includes multiple performance and productivity improvements for .NET and C++ developers.

Gaming and mixed reality

Minecraft AR game for mobile devices

At the end of Microsoft’s Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft.

https://www.youtube.com/watch?v=UiX0dVXiGa8

HoloLens 2 Development Edition and Unreal Engine support

The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits and three-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May.

Intelligent edge and IoT

Azure IoT Central new features

Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution:

• Better rules processing and custom rules with services like Azure Functions or Azure Stream Analytics
• Multiple dashboards and data visualization options for different types of users
• Inbound and outbound data connectors, so that operators can integrate with other systems
• The ability to add custom branding and operator resources to an IoT Central application with new white labeling options

The new Azure IoT Central features are available for customer trials.

IoT Plug and Play

IoT Plug and Play is a new, open modeling language for connecting IoT devices to the cloud seamlessly, without developers having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play enabled devices in Microsoft’s Azure IoT Device Catalog.
The first device partners include Compal, Kyocera, and STMicroelectronics, among others.

Azure Maps Mobility Service

Azure Maps Mobility Service is a new API that provides real-time public transit information, including nearby stops, routes and trip intelligence. The API will also provide transit services to help with city planning, logistics, and transportation. Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here.

KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat collaborated to create KEDA, an open-source project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment - in any public or private cloud, or on-premises - such as Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers that respond to events happening in other services or components, which allows a container to consume events directly from the source instead of routing them through HTTP. KEDA also presents a new hosting option for Azure Functions, which can be deployed as a container in Kubernetes clusters.

Securing elections and political campaigns

ElectionGuard SDK and Microsoft 365 for Campaigns

ElectionGuard is a free, open-source software development kit (SDK), released as an extension of Microsoft’s Defending Democracy Program, to enable end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here.

Microsoft Build is in its 6th year and will continue till 8th May. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here!

article-image-google-cloud-next19-day-1-open-source-partnerships-hybrid-cloud-platform-cloud-run-and-more
Bhagyashree R
10 Apr 2019
6 min read

Google Cloud Next’19 day 1: open-source partnerships, hybrid-cloud platform, Cloud Run, and more

Google Cloud Next ’19 kicked off yesterday in San Francisco. On day 1 of the event, Google showcased its new tools for application developers, announced partnerships with open-source companies, and outlined its strategy to make a mark in the cloud industry, which is currently dominated by Amazon and Microsoft. Here’s a rundown of the announcements Google made yesterday.

Google Cloud’s new CEO is set to expand its sales team

Cloud Next ’19 is the first event where the newly appointed Google Cloud CEO, Thomas Kurian, took the stage to share his plans for Google Cloud. He plans to make Google Cloud "the best strategic partner" for organizations modernizing their IT infrastructure. To step up its game in the cloud industry, Google needs to put more focus on understanding its customers, providing them better support, and making it easier for them to conduct business. This is why Kurian is planning to expand the sales team and add more technical specialists. Kurian, who joined Google after working at Oracle for 22 years, also shared that the team is rolling out new contracts to make contracting easier, and promised simplified pricing.

Anthos, Google’s hybrid cloud platform, is coming to AWS and Azure

During the opening keynote, Google CEO Sundar Pichai confirmed the rebranding of Cloud Services Platform, a platform for building and managing hybrid applications, as it enters general availability. The rebranded version, named Anthos, provides customers a single managed service that is not limited to Google-based environments and comes with extended support for Amazon Web Services (AWS) and Azure. With this extended support, Google aims to give organizations with a multi-cloud sourcing strategy a more consistent experience across all three clouds.

Urs Hölzle, Google’s Senior Vice President for Technical Infrastructure, shared in a press conference, “I can’t really stress how big a change that is in the industry, because this is really the stack for the next 20 years, meaning that it’s not really about the three different clouds that are all randomly different in small ways. This is the way that makes these three clouds — and actually on-premise environments, too — look the same.”

Another plus point of Anthos is that it is hardware agnostic, which means customers can run the service on top of their current hardware without having to immediately invest in new servers. It is a subscription-based service, with prices starting at $10,000/month per 100 vCPU block. Google also announced the first beta release of Anthos Migrate, a service that auto-migrates VMs from on-premises or other clouds directly into containers in Google Kubernetes Engine (GKE) with minimal effort. Explaining the advantage of this tool, Google wrote in a blog post, “Through this transformation, your IT team is free from managing infrastructure tasks like VM maintenance and OS patching, so it can focus on managing and developing applications.”

Google Cloud partners with top open-source projects challenging AWS

Google has partnered with several top open-source data management and analytics companies, including Confluent, DataStax, Elastic, InfluxData, MongoDB, Neo4j and Redis Labs. The services and products provided by these companies will be deeply integrated into the Google Cloud Platform. With this integration, Google aims to provide customers a seamless experience by letting them use these open-source technologies in a single place: Google Cloud.
These will be managed services, with invoicing and billing handled by Google Cloud. Customer support will also be Google’s responsibility, so that users can manage and log tickets across all of these services via a single platform.

Google’s approach of partnering with these open-source companies is quite different from that of other cloud providers. Over the past few years, we have come across cases where cloud providers sell open-source projects as a service, often without giving any credit to the original project. This has led companies to revisit their open-source licenses to stop such behavior. For instance, Redis adopted the Commons Clause license for its Redis Modules and later dropped its revised license in February. Similarly, MongoDB, Neo4j, and Confluent embraced a similar strategy. Kurian said, “In order to sustain the company behind the open-source technology, they need a monetization vehicle. If the cloud provider attacks them and takes that away, then they are not viable and it deteriorates the open-source community.”

Cloud Run for running stateless containers serverlessly

Google has combined serverless computing and containerization into a single product called Cloud Run. Yesterday, Oren Teich, Director of Product Management for Serverless, announced the beta release of Cloud Run and explained how it works. Cloud Run is a managed compute platform for running stateless containers that can be invoked via HTTP requests. It is built on top of Knative, a Kubernetes-based platform for building, deploying, and managing serverless workloads. You get two options to choose from: either run your containers fully managed with Cloud Run, or run them in your own Google Kubernetes Engine cluster with Cloud Run on GKE. (A minimal sketch of the kind of container Cloud Run expects appears at the end of this piece.)

Announcing the release of Cloud Run, Teich wrote in a blog post, “Cloud Run is introducing a brand new product that takes Docker containers and instantly gives you a URL. This is completely unique in the industry. We’re taking care of everything from the top end of SSL provisioning and routing, all the way down to actually running the container for you. You pay only by the hundred milliseconds of what you need to use, and it’s end-to-end managed.”

Google releases closed-source VS Code plugin

Google announced the beta release of “Cloud Code for VS Code” as a closed-source library. It extends VS Code to bring the convenience of an IDE to developing cloud-native Kubernetes applications, aiming to speed up the build, deployment, and debugging cycle. You can deploy your applications to either local clusters or across multiple cloud providers. Under the hood, Cloud Code for VS Code uses Google’s popular command-line tools, such as skaffold and kubectl, to give users continuous feedback as they build their projects. It also supports deployment profiles that let you define different environments to make testing and debugging easier on your workstation or in the cloud.

Cloud SQL now supports PostgreSQL 11.1 Beta

Cloud SQL is Google’s fully managed database service that makes it easier to set up, maintain, manage, and administer relational databases on GCP. It now comes with support for PostgreSQL 11.1 Beta. Along with that, it supports the following relational databases:

• MySQL 5.5, 5.6, and 5.7
• PostgreSQL 9.6
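To illustrate the Cloud Run model described above, here is a minimal sketch of the kind of container it runs: any web server that listens on the port passed in through the PORT environment variable. The Go program below is a hedged example rather than Google's own sample code; it would still need to be packaged into a container image and deployed with the gcloud CLI.

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Cloud Run tells the container which port to listen on via $PORT.
        port := os.Getenv("PORT")
        if port == "" {
            port = "8080" // sensible default for local runs
        }

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "Hello from a stateless container")
        })

        log.Printf("listening on port %s", port)
        log.Fatal(http.ListenAndServe(":"+port, nil))
    }

Because the contract is simply "serve HTTP on $PORT", the same image can run on fully managed Cloud Run, on Cloud Run on GKE, or locally under Docker without code changes.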
article-image-chef-goes-open-source-ditching-the-loose-open-core-model
Richard Gall
02 Apr 2019
5 min read

Chef goes open source, ditching the Loose Open Core model

Chef, the infrastructure automation tool, has today revealed that it is going completely open source, ditching the Loose Open Core model in the process. The news is particularly intriguing as it comes at a time when the traditional open-source model appears to be facing challenges around its future sustainability. It would appear that, from Chef's perspective, the switch to a full open-source license is being driven by a crowded marketplace in which automation tools are finding it hard to gain a foothold inside organizations trying to automate their infrastructure. A further challenge for this market is what Chef has identified as 'The Coded Enterprise': essentially, technologically progressive organizations driven by an engineering culture where infrastructure is primarily viewed as code.

Read next: Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity

Why is Chef going open source?

As you might expect, there's actually more to Chef's decision than pure commercialism. To get a good understanding, it's worth picking apart Chef's open core model and how it was limiting the project.

The limitations of Open Core

The Loose Open Core model has open-source software at its center but wraps it in proprietary software. So, it's open at its core, but largely proprietary in how it is deployed and used by businesses. While at first glance this might make it easier to monetize the project, it also severely limits the project's ability to evolve and develop according to the needs of the people that matter - the people that use it. Indeed, one way of thinking about it is that the open core model positions your software as a product - something that is defined by product managers and lives and dies by its stickiness with customers. By going open source, your software becomes a project, something that is shared and owned by a community of people that believe in it.

Speaking to TechCrunch, Chef co-founder Adam Jacob said, "in the open core model, you’re saying that the value is in this proprietary sliver. The part you pay me for is this sliver of its value. And I think that’s incorrect... the value was always in the totality of the product."

Read next: Chef Language and Style

Removing the friction between product and project

Jacob published an article on Medium expressing his delight at the news. It's an instructive look at how Chef has been thinking about itself and the challenges it faces. "Deciding what’s in, and what’s out, or where to focus, was the hardest part of the job at Chef," Jacob wrote. "I’m stoked nobody has to do it anymore. I’m stoked we can have the entire company participating in the open source community, rather than burning out a few dedicated heroes. I’m stoked we no longer have to justify the value of what we do in terms of what we hold back from collaborating with people on."

So, what's the deal with the Chef Enterprise Automation Stack?

As well as announcing that Chef will be open sourcing its code, the organization also revealed that it is bringing together Chef Automate, Chef Infra, Chef InSpec, Chef Habitat and Chef Workstation under one single solution: the Chef Enterprise Automation Stack. The point here is to simplify Chef's offering to its customers, making it easier for them to do everything they can to properly build and automate reliable infrastructure. Corey Scobie, SVP of Product and Engineering, said that "the introduction of the Chef Enterprise Automation Stack builds on [the switch to open source]...
aligning our business model with our customers’ stated needs through Chef software distribution, services, assurances and direct engagement. Moving forward, the best, fastest, most reliable way to get Chef products and content will be through our commercial distributions.”

So, essentially, the Chef Enterprise Automation Stack will be the primary Chef distribution available commercially, sitting alongside the open-source project.

What does all this mean for Chef customers and users?

If you're a Chef user or have any questions or concerns, the team has put together a very helpful FAQ. You can read it here. The key points for Chef users: existing commercial and non-commercial users don't need to do anything - everything will continue as normal. However, anyone else using current releases should be aware that support will be removed from those releases in 12 months' time. The team has clarified that "customers who choose to use our new software versions will be subject to the new license terms and will have an opportunity to create a commercial relationship with Chef, with all of the accompanying benefits that provides."

A big step for Chef - could it help determine the evolution of open source?

This is a significant step for Chef, and it will be of particular interest to its users. But even for those who have no interest in Chef, it's a story that indicates there's plenty of life in open source despite the challenges it faces. It'll certainly be interesting to see whether Chef makes it work and what impact it has on the configuration management marketplace.

article-image-top-reasons-why-businesses-should-adopt-enterprise-collaboration-tools
Guest Contributor
05 Mar 2019
8 min read

Top reasons why businesses should adopt enterprise collaboration tools

Following the trends of the modern digital workplace, organizations are applying automation even to domains that are intrinsically human-centric. Collaboration is one of them. And while organizations have already gained broad experience in digitizing business processes and foreseeing their potential pitfalls, the situation is different with collaboration: automating collaboration processes can bring a significant number of unexpected challenges, even to companies that have tested the waters.

State of Collaboration 2018 reveals a curious fact: even though organizations can be highly involved in collaborative initiatives, employees still report that both they and their companies are poorly prepared to collaborate. Almost a quarter of respondents (24%) affirm that they lack relevant enterprise collaboration tools, while 27% say that their organizations undervalue collaboration and don't offer any incentives to support it. Two reasons can explain these stats:

• The collaboration process can hardly be standardized and split into precise workflows. The number of collaboration scenarios is enormous, and it's impossible to fit them all into a single software solution. It's also pretty hard to manage collaboration, assess its effectiveness, or understand bottlenecks.
• Unlike business process automation systems, which play a critical role in an organization and ensure core production or business activities, enterprise collaboration tools are mostly seen as supplementary solutions, so they are the last to be implemented. Moreover, as organizations often don't spend much effort on adapting collaboration tools to their specifics, the end solutions are frequently subject to poor adoption.

At the same time, the IT market offers numerous enterprise collaboration tools: Slack, Trello, Stride, Confluence, Google Suite, Workplace by Facebook, SharePoint and Office 365, to mention a few, all compete to win enterprises' loyalty. But how do you choose the right enterprise collaboration tools and make them effective? And how do you get employees to actively use the tools you implement? To answer these questions and understand how to succeed in their collaboration-focused projects, organizations have to examine both the tech- and employee-related challenges they may face.

Challenges rooted in technologies

From the enterprise collaboration tool's deployment model to its customization and integration flexibility, companies should consider a whole array of aspects before deciding which solution to implement.

Selecting a technologically suitable solution

Finding a proper solution is a long process that requires companies to make several important decisions:

Cloud or on-premises? By choosing the deployment type, organizations define the future infrastructure needed to run the solution, the required management effort, data location, and the amount of customization available. Cloud solutions can help enterprises save both technical and human resources, but companies often mistrust them because of security concerns. On-premises solutions can be attractive from the customization, performance, and security points of view, but they are resource-demanding and expensive due to high licensing costs.

Ready-to-use or custom? Today many vendors offer ready-made enterprise collaboration tools, particularly in the field of enterprise intranets. This option is attractive for organizations because they can save on customizing a solution from scratch.
However, with ready-made products, organizations run a bigger risk of being locked into a vendor's rigid policies (subscription or ownership price, support rates, functional capabilities, and so on). If companies choose custom enterprise collaboration software, they have a wider choice of IT service providers to cooperate with and can adjust the solution to their needs.

One tool or several integrated tools? Some organizations prefer using a couple of apps that cover different collaboration needs (for example, document management, video conferencing, and instant messaging). Alternatively, companies can go for a centralized solution, such as SharePoint or Office 365, which can support all collaboration types and let users create a centralized enterprise collaboration environment.

Exploring integration options

Collaboration isn't an isolated process; it is tightly related to the business or organizational activities that employees carry out. That's why integration capabilities are among the most critical aspects companies should check before investing in their collaboration stack. Connecting an enterprise collaboration tool to ERP, CRM, HRM, or ITSM solutions will not only contribute to business process consistency but will also reduce the risk of collaboration gaps and communication inconsistencies.

Planning ongoing investment

Like any other business solution, an enterprise collaboration tool requires financial investment to implement, customize (even ready-made solutions require tuning), and support. The initial budget will strongly depend on the deployment type, the estimated number of users, and the customizations needed. While planning their yearly collaboration investment, companies should remember that their budgets should cover not only the activities necessary to ensure the solution's technical health but also a user adoption program.

Eliminating duplicate functionality

Consider the following scenario: a company implements a collaboration tool that includes project management functionality while also running a legacy project management system. The same situation can happen with time tracking, document management, knowledge management, and other stand-alone solutions. In this case, it is reasonable to consider switching to the new suite completely and retiring the legacy one. For example, by choosing SharePoint Server or Online, organizations can unite various functions within a single solution. To ensure a smooth transition to the new environment, SharePoint developers can migrate all the data from legacy systems, making it part of the new solution.

Choosing a security vector

As mentioned before, the solution's deployment model dictates the security measures that organizations have to take. Sometimes security is the paramount reason that holds an enterprise's collaboration initiatives back. Security concerns are particularly characteristic of organizations that hesitate between on-premises and cloud solutions. SharePoint and Office 365 trends 2018 show that security represents the major worry for organizations considering moving their on-premises deployments to cloud environments. What's even more surprising is that while software providers like Microsoft are continually improving their security measures, the degree of concern keeps growing: the report reveals that 50% of businesses were concerned about security in 2018, compared to 36% in 2017 and 32% in 2016.
Choosing a security vector

As mentioned before, the solution's deployment model dictates the security measures that organizations have to take. Sometimes security is the paramount reason that holds enterprises' collaboration initiatives back. Security concerns are particularly characteristic of organizations that hesitate between on-premises and cloud solutions. The SharePoint and Office 365 Trends 2018 report shows that security is the major worry for organizations considering a move from on-premises deployments to cloud environments. What's even more surprising is that while software providers like Microsoft are continually improving their security measures, the degree of concern keeps growing: the same report reveals that 50% of businesses were concerned about security in 2018, compared to 36% in 2017 and 32% in 2016.

Human-related challenges

Technology challenges are numerous, but they can all be solved fairly quickly, especially if a company partners with a professional IT service provider that backs it up at the technical level. At the same time, companies should be ready to face employee-related barriers that may ruin their collaboration effort.

Changing employees' typical style of collaboration

Don't expect employees to welcome the new collaboration solution: it is about to change their typical collaboration style, which may be difficult for many. Some employees won't share their knowledge openly, while others will find it hard to switch from one-to-one discussions to digitized team meetings. In this context, change management should work at two levels, a technological one and a mental one. Companies should not just explain to employees how to use the new solution effectively, but also show each team how to adapt the collaboration system to the needs of each team member without damaging the usual collaboration flow.

Finding the right tools for collaborators and non-collaborators

Every team consists of different personalities. Some people are open to collaboration; others are quite hesitant. The task is to ensure productive cooperation between these two very different types of employees and everyone in between. Teams shouldn't expect instant collaboration consistency or general satisfaction; both are only achievable if the entire team works together to create an optimal collaboration area for each individual.

Launching digital collaboration within large distributed teams

When collaboration is organized within a small or medium-sized team, difficulties are relatively easy to avoid because the collaboration flow is moderate. But when it comes to collaboration in big teams, the risk of failure increases dramatically. Organizing effective communication among remote employees, connecting distributed offices, offering relevant collaboration areas to the entire team and its subteams, and enabling cross-device consistency of collaboration are just a few of the steps required for effective teamwork.

Preparing strategies to overcome adoption difficulties

The biggest human-related challenge is the poor adoption of an enterprise collaboration system. It can be hard for employees to get used to the new solution and to accept the new communication medium with its UI and logic. Adoption issues are critical to address because they may bring more severe consequences than the tech-related ones. For example, if there is a functional defect in a solution, a company can fix it within a few days. However, if there are adoption issues, all the effort an organization puts into polishing the technology can be wasted because employees simply don't use the solution. Ongoing training and communication between the collaboration manager and individual teams are a must to keep employees satisfied with the solution they use.

Is there more pain than gain?

Recognizing all these challenges, companies might feel that there are too many barriers to overcome to get a decent collaboration solution. So maybe it is reasonable to stay out of the collaboration race altogether? Not really. If you take a look at the Internet Trends 2018 report, you will see that there are multiple improvements that companies gain as they adopt enterprise collaboration tools.
Typical advantages include reduced meeting time, quicker onboarding, less time spent on support, more effective document management, and a substantial rise in team productivity. If your company wants to reap these benefits, be ready to face the possible collaboration challenges; the reward is worth it.

Author Bio

Sandra Lupanova is a SharePoint and Office 365 Evangelist at Itransition, a software development and IT consulting company headquartered in Denver. Sandra focuses on SharePoint and Office 365 capabilities and the challenges that companies face while adopting these platforms, and shares practical tips on how to improve SharePoint and Office 365 deployments through her articles.