Kubernetes was originally built by engineers at Google who had been responsible for Borg, its internal container scheduler.
Learning to run your own infrastructure with Kubernetes gives you some of the same superpowers that Google's site reliability engineers use to keep its services resilient, reliable, and efficient. It also lets you benefit from the knowledge and expertise that engineers at Google and other companies have built up by operating at massive scale.
Your organization may never need to operate at the scale of a company such as Google. You will, however, discover that many of the tools and techniques developed in companies that operate on clusters of tens of thousands of machines are applicable to organizations running much smaller deployments.
While it is clearly possible for a small team to manually configure and operate tens of machines, the automation needed at larger scales can make your life simpler and your software more reliable. And if you later need to scale up from tens of machines to hundreds or even thousands, you'll know that the tools you are using have already been battle tested in the harshest of environments.
The fact that Kubernetes exists at all is both a measure of the success of, and a vindication of, the open source/free software movement. Kubernetes began as a project to open source an implementation of the ideas and research behind Google's internal container orchestration system, Borg. It has since taken on a life of its own, with the majority of its code now contributed by engineers outside Google.
The story of Kubernetes is not only one of Google recognizing the benefits that open sourcing its knowledge would indirectly bring to its own cloud business; it is also a story of the open source implementations of the underlying tools it needed coming of age.
Linux containers had existed in one form or another for almost a decade, but it took the Docker project (first open sourced in 2013) to make them widely used and understood. While Docker did not itself introduce any single new underlying technology, its innovation was packaging tools that already existed behind a simple, easy-to-use interface.
Kubernetes was also made possible by the existence of etcd, a key-value store based on the Raft consensus algorithm that was likewise first released in 2013, originally to underpin a cluster scheduling tool being built by CoreOS. For Borg, Google had used an underlying state store based on the closely related Paxos algorithm, which made etcd the perfect fit for Kubernetes.
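To give a flavour of the kind of API etcd exposes, here is a minimal sketch using the official Go client (go.etcd.io/etcd/client/v3). It assumes an etcd instance is already running locally on the default client port, 2379, and the key name is purely illustrative rather than one Kubernetes itself uses:

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to a local etcd instance (assumes etcd is listening on 2379,
	// the default client port).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Store and read back a key, much as Kubernetes components persist
	// cluster state under keys in etcd. The key below is illustrative only.
	if _, err := cli.Put(ctx, "/example/greeting", "hello"); err != nil {
		panic(err)
	}
	resp, err := cli.Get(ctx, "/example/greeting")
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```

Kubernetes uses exactly this kind of simple, consistent key-value storage to hold the desired and observed state of the whole cluster.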
At a time when Linux containers were becoming more popular thanks to the influence of Docker, Google was prepared to take the initiative and create an open source implementation of knowledge that, until then, had been a major competitive advantage for its engineering organization.
Kubernetes itself is written in Go, and in my view the simplicity of the language is what makes it such a good choice for open source infrastructure tools: a wide variety of developers can pick up the basics in a few hours and start making productive contributions to a project.
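To give a flavour of that simplicity, here is a deliberately small, entirely illustrative Go program; nothing about it is specific to Kubernetes, it simply shows how little syntax a newcomer needs to absorb before the code starts to read naturally:

```go
package main

import "fmt"

// A deliberately tiny program: one import, inferred types, and a single
// loop construct cover most of what a reader needs to follow real code.
func main() {
	projects := []string{"Kubernetes", "etcd", "Docker"}
	for _, name := range projects {
		fmt.Printf("Hello, %s!\n", name)
	}
}
```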
If you are interested in finding out more about the Go programming language, you could try taking a look at https://tour.golang.org/welcome/1 and then spend an hour looking at https://gobyexample.com.