Ben Golub, CEO of Docker, speaks exclusively to neen [INTERVIEW]

09 March 2015

Ben Golub has been the CEO of Docker Inc. since 2013, the company behind the open-source Docker project, which automates the deployment of applications inside software containers. Speaking exclusively to neen, he answers a number of business and technical questions. The full interview follows!

How did Docker come about and which needs was it intended to address?

Docker recognized the need for developers and sysadmins to have a more efficient and simplified way to build, ship and run applications. With an incredible and robust ecosystem backing the project, Docker provides an open platform for its users to build, ship and run applications across every phase of the application lifecycle, on any host. This means faster delivery times and improved service levels for developers, entirely reshaping how companies conduct business by giving them the tools to maintain a competitive edge in a crowded market.
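To make the "open platform" concrete for readers: the Docker daemon exposes everything through an HTTP Remote API, which any client can drive directly. Below is a minimal Go sketch that lists running containers; it assumes the daemon is listening on its default unix socket at /var/run/docker.sock and uses the 2015-era endpoint paths, so treat it as an illustration rather than the official client.

```go
package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	// The Docker daemon serves its Remote API over a unix socket by
	// default; route all HTTP traffic through that socket.
	client := &http.Client{
		Transport: &http.Transport{
			Dial: func(network, addr string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/docker.sock")
			},
		},
	}

	// GET /containers/json returns the running containers as JSON.
	// The "unix" host name is a placeholder; the Dial above ignores it.
	resp, err := client.Get("http://unix/containers/json")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot reach the Docker daemon:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// Print the raw JSON response to stdout.
	io.Copy(os.Stdout, resp.Body)
}
```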

What is the current growth rate?

Docker is considered one of the fastest-growing open-source projects to date. At the beginning of 2014, Docker had been downloaded less than one million times. Now that number sits at over 200 million. We are supported by 770 contributors and there are nearly 100,000 Dockerized applications. In less than 24 months, Docker, Inc., the corporate sponsor of the open-source project, has grown from 25 employees to nearly 100.

What advantages does Docker have compared to traditional virtualization?

Docker and VMs are highly complementary, but Docker can also be leveraged as an alternative to virtualization, since more containerization can often mean less virtualization. Docker containers are 100 percent portable, providing an isolated and lightweight solution that runs on the host's operating system. Using containers makes distributed applications infrastructure-independent and, by extension, removes dependencies on a specific type of VM.

Does Docker represent the future of virtualization? Will containers end up taking the place of virtualization or will they co-exist to meet two different requirements? 

Docker and VMs are complementary and work best when used together, depending on the specific use case. Dockerized applications running in VMware VMs on a common platform provide agility and interoperability while also delivering the control and efficiency IT/Ops require. In many cases, it is in the developer's best interest to use containers and VMs together, which is why we believe they can and will continue to co-exist.

One of the difficulties sysadmins flag up most often is managing all of the networking pieces of a container, for example /etc/hosts and /etc/resolv.conf. What do you think about that? Do you agree? What future objectives does Docker have in relation to networking?

Docker is very focused on making new advances in software-defined networking for Docker-based distributed applications. There is incredible interest in the community, which is now building new applications on top of Docker, and we want to make sure there is a flourishing ecosystem to support that effort. That is why Docker announced its acquisition of SocketPlane, whose team will work with the community to drive a richer set of networking APIs.
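On the specific point of /etc/hosts and /etc/resolv.conf, the Remote API already lets a caller shape both files at container-creation time. Continuing the Go/unix-socket approach from the earlier sketch, the example below posts a create request whose HostConfig carries Dns and ExtraHosts entries, which Docker writes into the container's /etc/resolv.conf and /etc/hosts respectively; the image name, DNS server and host mapping are illustrative values, not taken from the interview.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			Dial: func(network, addr string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/docker.sock")
			},
		},
	}

	// Dns populates the container's /etc/resolv.conf; ExtraHosts adds
	// lines to its /etc/hosts (the Remote API equivalent of the
	// --dns and --add-host CLI flags).
	body := []byte(`{
		"Image": "ubuntu:14.04",
		"Cmd": ["cat", "/etc/hosts", "/etc/resolv.conf"],
		"HostConfig": {
			"Dns": ["8.8.8.8"],
			"ExtraHosts": ["db.internal:10.0.0.5"]
		}
	}`)

	// POST /containers/create returns the new container's Id.
	resp, err := client.Post("http://unix/containers/create",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body)
}
```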

As of today, Docker does not offer a complete, centralized, web-based system for container management. Is this set to change in the future? We know open-source products exist, but none of them is ready to be used in production.

Docker just announced significant progress on its three orchestration tools: Docker Machine, Docker Swarm and Docker Compose. These solutions, albeit not GUI-based, are powerful tools that allow developers and operations alike to support and accelerate the entire application development lifecycle. These "batteries included but swappable" solutions support a rich third-party ecosystem of swap-in alternatives, which provides great freedom of choice for our users and, in turn, assurance around application portability.
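One consequence of this "swappable" design worth spelling out: Docker Swarm exposes the same Remote API as a single engine, so client code written against one daemon can be pointed at a whole cluster unchanged. A minimal sketch, assuming a hypothetical Swarm manager reachable at swarm-manager:2375:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// A Swarm manager speaks the same Remote API as a single Docker
	// engine, so the only change from the single-host examples above
	// is the address. "swarm-manager:2375" is a hypothetical endpoint.
	resp, err := http.Get("http://swarm-manager:2375/containers/json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// The JSON now lists containers scheduled across the cluster.
	io.Copy(os.Stdout, resp.Body)
}
```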

Certain companies have complained about the complexity of implementing Docker in production environments, and about the almost total lack of documentation for disaster recovery. Do you agree with these criticisms and are you doing anything to address them?

The rapid adoption of the Docker platform has been driven by the great benefits it provides to both developers and sysadmins alike. That is why we now have millions of users in less than two years of the project's life. Having said that, the technology and documentation are constantly evolving and maturing, and there is most certainly more work we can do to support end-users as they take the journey with Docker into production.

A lot of companies are skeptical about the networking security between the Docker container and the host. What’s your take on it? Are they right, and are you making moves to boost security?

Any security strategy is based on the notion of having layers of security. Docker containers add a "no cost" layer of security to application architectures on both bare-metal servers and VMs; no cost in the sense that Docker containers are a lightweight isolation approach that has no adverse impact on application or server resources. Docker leverages core Linux technologies such as namespaces and cgroups, along with process, file and device restrictions. These capabilities reduce the attack surface and provide application isolation. Moreover, the Docker model for container management makes it possible to roll out critical security patches at a speed and scale not previously available.
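To make the namespace point concrete, here is a minimal, Linux-only sketch of the kernel primitives Golub mentions: it launches a shell in fresh UTS, PID and mount namespaces, the same isolation building blocks Docker composes. It requires root, and it illustrates the raw primitives rather than Docker's actual code.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start a shell in new UTS, PID and mount namespaces -- three of
	// the kernel isolation primitives Docker builds on. Inside the
	// shell, changing the hostname or listing processes will show
	// the isolation from the host.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS |
			syscall.CLONE_NEWPID |
			syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```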

There are certainly areas of improvement for a technology that is less than two years old, but we would argue that applications are more secure with Docker containers than they were before. We are very confident in the evolution of these capabilities thus far and in the roadmap we have established with the community. We are always working on ways to improve security capabilities, as can be expected given Docker's rapid adoption and evolution. In early March, Nathan McCauley and Diogo Monica joined Docker after building a wide range of security systems that allowed Square to move hundreds of millions of dollars per day.

Container technology is not new: it goes back more than 10 years, with Jails on BSD and, on Linux, with Parallels Virtuozzo Containers back in 2001. In your opinion, what are the main differences between those earlier container technologies and Docker, and what are the reasons behind the recent boom, not just in the hosting world but also in the enterprise?

While containers have been around for a while, they were complex and far from developer-friendly. Docker took container technology and created an open platform that is not only simple for developers to use, but that also addresses the entire application lifecycle. Docker's rapid growth can largely be attributed to the portability it provides, enabling users to build, ship and run distributed applications without being bound to the underlying infrastructure. The robust ecosystem that has developed around Docker is playing a major role in its evolution and adoption, and we continue to see an uptick in enterprises using Docker to transform their business strategies.

Would it be right to say that Docker is not really a container technology but rather orchestration software that makes using containers easier, more stable and more secure, and, thanks to the numerous partnerships Docker has entered into, also more standardized?

Docker containers are the de facto standard container format. Docker is an open platform for building, shipping and running containerized applications anywhere. It has a rich toolset that allows developers and sysadmins to create a new generation of portable, dynamic and highly composable applications, letting organizations iterate on their applications hundreds of times a day.

Do you think that the main advantages of Docker are due to:

  • the possibility to isolate applications in containers?
  • the possibility to make the most of hardware resources compared to virtualization, eliminating the overhead of the various guest operating systems?
  • the possibility for developers to transport applications in containers, as if they were black boxes?

Docker enables companies to fully realize the benefits of distributed applications. The main advantages of Docker are tied to portability and velocity of innovation. Docker containers are 100 percent portable across environments, ensuring that distributed applications run in the same manner on a laptop, in a test environment, in a production bare-metal data center, or in the cloud. That portability ensures that applications are never locked into any one environment or cloud. Additionally, distributed applications based upon Docker can be iterated on dozens to hundreds of times a day, whereas monolithic applications were typically iterated on over weeks or months.

Luca Zucconi, Web Product Manager
