Kubernetes: A Case Study of Huawei

Rishi Agrawal
4 min read · Mar 14, 2021

In this post I am going to discuss a case study of Huawei: the challenge they faced and how they used Kubernetes to solve it. First, let's dive into Kubernetes: what it is and how it works.

What is Kubernetes?

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. Originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), it has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.

Evolution of deployment (source: Google)

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
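To make the scaling and failover behavior concrete, here is a minimal Kubernetes Deployment manifest. The names, image, and replica count are illustrative, not from the Huawei case study. With `replicas: 3`, Kubernetes keeps three copies of the container running and replaces any that fail; the `RollingUpdate` strategy lets new versions roll out gradually without downtime:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical name, for illustration only
spec:
  replicas: 3               # Kubernetes keeps 3 pods running; failed pods are replaced
  selector:
    matchLabels:
      app: demo-app
  strategy:
    type: RollingUpdate     # replace pods gradually for zero-downtime updates
    rollingUpdate:
      maxUnavailable: 1     # at most one pod down during a rollout
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.25   # illustrative image; any containerized app works
        ports:
        - containerPort: 80
```

You would apply this with `kubectl apply -f deployment.yaml`, and scaling becomes a one-liner: `kubectl scale deployment demo-app --replicas=5`.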

What was the challenge faced by Huawei?

Huawei is a multinational company with around 180,000 employees around the globe and one of the largest telecom equipment manufacturers in the market.

In order to support its fast business development around the globe, Huawei runs eight data centers for its internal I.T. department, which have been running 800+ applications in 100K+ VMs to serve these 180,000 users. With the rapid increase of new applications, the cost and efficiency of managing and deploying VM-based apps became critical challenges for business agility.

“If you’re a vendor, in order to convince your customer, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology.”

— PEIXIN HOU, CHIEF SOFTWARE ARCHITECT AND COMMUNITY DIRECTOR FOR OPEN SOURCE

Solution

After deciding to use container technology, Huawei began moving the internal I.T. department’s applications to run on Kubernetes. So far, about 30 percent of these applications have been migrated to cloud native.

Impacts

“By the end of 2016, Huawei’s internal I.T. department managed more than 4,000 nodes with tens of thousands of containers using a Kubernetes-based Platform as a Service (PaaS) solution,” says Hou. “The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold.” For the bottom line, he says, “We also see a significant cut in operating expenses, in some circumstances 20–30 percent, which we think is very helpful for our business.” Given the results Huawei has had internally, and the demand it is seeing externally, the company has also built the technologies into FusionStage™, the PaaS solution it offers its customers.

In the past, Huawei had used virtual machines to encapsulate applications, but “every time when we start a VM,” Hou says, “whether because it’s a new service or because it was a service that was shut down because of some abnormal node functioning, it takes a lot of time.” Huawei turned to containerization, and the timing was right to try Kubernetes. Adoption took a year, as the process “is not overnight,” says Hou, but once in use, he says, “Kubernetes basically solved most of our problems. Before, the time of deployment took about a week, now it only takes minutes. The developers are happy. That department is also quite happy.”

Hou sees great benefits to the company that come with using this technology: “Kubernetes brings agility, scale-out capability, and DevOps practice to the cloud-based applications,” he says. “It provides us with the ability to customize the scheduling architecture, which makes possible the affinity between container tasks that gives greater efficiency. It supports multiple container formats. It has extensive support for various container networking solutions and container storage.”
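The “affinity between container tasks” Hou mentions corresponds to Kubernetes’ scheduling affinity rules. A minimal sketch (the pod names, labels, and image are assumptions for illustration): this pod asks the scheduler to place it on a node that is already running a pod labeled `app: cache`, so that frequently communicating tasks share a host:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend        # hypothetical pod, for illustration only
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache      # co-schedule next to pods labeled app=cache
        topologyKey: kubernetes.io/hostname   # "same node" granularity
  containers:
  - name: web
    image: nginx:1.25       # illustrative image
```

Changing `topologyKey` to a zone label would relax the rule from “same node” to “same zone”, which is one way such affinity tuning yields the efficiency gains Hou describes.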

“In the next 10 years, maybe 80 percent of the workload can be distributed, can be run on the cloud native environments. There’s still 20 percent that’s not, but it’s fine. If we can make 80 percent of our workload really be cloud native, to have agility, it’s a much better world at the end of the day.”

Thanks, everyone, for reading. I hope this gave you a glimpse of the case study!


Rishi Agrawal

Aspiring MLOps engineer with multi-cloud and Flutter/MERN experience