How Containers Are Making Way for the 5G and Edge-Centric World

September 26, 2019

Containers are a form of virtualization, and Network Function Virtualization (NFV) is the application of virtualization technologies to the telco world and its functions. It's important, however, to note that containers are not the same thing as NFV. InFocus has been covering the subject of NFV and Software-Defined Networking (SDN) technologies for the past year, but I pose the question: how do containers fit in?

In this blog series, I will discuss the origins of the container and its affiliation with Airship (Part I), home in on the architecture of the container in Part II, and discuss its role in the future in Part III.

Airship and Open Infrastructure
You have probably seen the news of the collaboration between AT&T and Dell Technologies around AT&T’s Network Cloud, powered by Airship, a collection of open-source tools for automating cloud provisioning and management.

On August 15, 2019, Amy Wheelus, V.P. of AT&T Network Cloud, said:

This collaboration will not only enable us to accelerate the AT&T Network Cloud on the Dell Technologies infrastructure but also further the broader community goal of making it as simple as possible for operators to deploy and manage open infrastructure in support of SDN and other workloads.

Further, our very own V.P. of Service Provider Solutions, Kevin Shatzkamer, recently stated:

Dell Technologies is working closely with AT&T to combine our joint telco industry best practices with decades of data center transformation experience to help service providers quickly roll out new breeds of experiential Edge and 5G services.

Refer to Figure 2 for a detailed visual of the Airship process:

Figure 2: The Airship process. Source: airshipit.com

The Modern Container
First things first: we are not talking about the containers you find at "Bed Bath and Beyond" to store your miscellaneous junk. And although most people have heard about containers in the context of IT only in the last 2-5 years, the container concept itself is not new. Back in the early 80s, the chroot system call was added to BSD[1], having first been developed for Version 7 Unix in 1979.

This system call provided an isolated operating environment where applications and services could run. Chroot changes the apparent root directory for the running process and its children, so a program launched in that modified environment cannot access files outside the designated directory tree. Unix-like systems such as FreeBSD and Linux were always built around security and user isolation, and that is the point in history where the grandfather of the modern container was born.
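To make the idea concrete, here is a minimal Python sketch of chroot-style isolation. It assumes a prepared directory at a hypothetical path /srv/jail, and it must run as root on a Unix-like system; the path and contents are illustrative, not taken from the article.

import os

jail = "/srv/jail"      # hypothetical directory prepared ahead of time
os.chroot(jail)         # /srv/jail becomes the apparent root directory
os.chdir("/")           # step into the new root
# From here on, this process and its children cannot see files
# outside the original /srv/jail directory tree.
print(os.listdir("/"))  # lists only what lives inside the jail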

The "father" of containers arrived in the early 2000s with FreeBSD "jails," which built upon chroot by adding advanced isolation features beyond files and processes, such as networking isolation that gives each jail its own IP address. Later, Sun brought Solaris Containers, which introduced the concept of isolated segments called zones, and these ideas were subsequently ported to Linux, giving birth to Linux containers (2008-2009). These are the "fathers" of the modern container, which was brought to life in 2013 by Docker, the name most people think of whenever containers come up in the Cloud-Telco world.

The Evolution of Containers
Docker is probably the most well-known name in containers and has introduced many new features, like a Command Line Interface (CLI), an Application Programming Interface (API), cluster management tools, and more. However, there are other players in the area beyond the Unix/Linux systems. Microsoft, for example, is now deep into container implementation and has supported containers since the release of Windows 10 and Windows Server 2016.
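As a taste of that API, here is a minimal sketch using the Docker SDK for Python (installable with pip install docker). It assumes a local Docker daemon is running; the alpine image and the echo command are illustrative choices, not specifics from the article.

import docker

client = docker.from_env()   # connect to the local Docker daemon

# Run a throwaway container and capture its output.
output = client.containers.run(
    "alpine", ["echo", "hello from a container"], remove=True
)
print(output.decode().strip())

# List the containers currently running on this host.
for c in client.containers.list():
    print(c.short_id, c.image.tags)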

Figure 3: Cloud computing. Source: fastmetrics.com

I know what you're thinking: "Hold on one second, Javier. Are you telling me that the idea of containers has been alive for 40 years?" Yes, my friend! The core idea is old, but its features and implementation have changed significantly over the years. This is a recurring theme in technology; ideas are born, implemented, used for a while, discarded for other ideas, and then brought back to life with enhancements. After all, even the whole concept of Cloud Computing is much older than most people think; it can be traced back to the mainframes and dumb terminals of the 60s and 70s.

Utilization Efficiencies of VMs and Hypervisors
If we go back in time about 20 years to the birth of virtualization, Virtual Machines (VMs) and hypervisors, we see that one of the driving forces was that typical servers ran at an average utilization of between 5 and 20 percent. This means that, on average, 80 percent or more of the compute power was simply wasted. With hypervisors, we were able to increase workload density significantly and reduce the energy wasted.
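To put rough numbers on that, here is an illustrative back-of-the-envelope calculation in Python. The 10 percent average utilization falls within the 5-20 percent range above; the 100 servers and the 70 percent target utilization are assumptions made purely for the example.

import math

servers = 100               # physical servers before virtualization (assumed)
avg_utilization = 0.10      # ~10% average utilization per server
target_utilization = 0.70   # how hard we are willing to drive a virtualized host (assumed)

# Total useful work, expressed in "fully busy server" equivalents.
useful_work = servers * avg_utilization              # 10 servers' worth

# Hosts needed once VMs let us pack that work at the target utilization.
hosts_needed = math.ceil(useful_work / target_utilization)

print(f"Before: {servers} servers, roughly {1 - avg_utilization:.0%} of capacity idle")
print(f"After:  {hosts_needed} virtualized hosts carry the same workload")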

So, what is the problem, and what are the main differences between virtualization with hypervisors and containers? Look at Figure 4 to see the main issue: the inherent waste common to all virtual machines.

Figure 4: The inherent waste common to all VMs.

At the end of the day, we use VMs to run applications on top. What we really care about is the applications, but each application needs an OS to access the physical resources (compute, storage, network) it requires. In other words, whenever we migrate an application, we have to drag an entire OS along with it. So, while we increase utilization efficiency, we also have to ask how much duplicated overhead we create. Think of a highway full of cars. Most cars can seat five adults comfortably, yet in most cities during commuter peak hours there is only one person per car. Now envision that the person is the application, the car is the OS, and the highway is the actual physical resources. It is a tremendous waste.

Summary
To combat this issue of waste in VMs, it is important to dig deep into containers and learn how they work. Stay tuned for Part II, where I'll discuss the architecture of containers.

In the meantime, what are your theories on how to increase utilization efficiency with containers while also reducing waste?

[1] BSD or Berkeley Software Distribution is the name of distributions of source code from the University of California, Berkeley, which were originally extensions to AT&T’s Research UNIX® operating system.

 


 
