
Linux containers are changing the way companies think about service development and deployment. Containers play a vital role in the modern data center, and Docker is leading the way. This course covers all the core features of Docker, including container creation and management, interacting with Docker Hub, using a Dockerfile to create and manage custom images, and advanced Docker networking (how to safely expose container services to the world, and how to link containers).
Description
- Container Technology Overview
- Application Management Landscape
- Application Isolation
- Resource Measurement and Control
- Container Security
- OverlayFS Overview
- Open Container Initiative
- Docker Alternatives
- Docker Ecosystem
Lab Tasks
- Container Concepts: runC
- Container Concepts: systemd
- Installing Docker
- Docker Architecture
- Starting the Docker Daemon
- Docker Daemon Configuration
- Docker Control Socket
- Enabling TLS for Docker
- Validating Docker Install
Lab Tasks
- Installing Docker
- Protecting Docker with TLS
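The daemon configuration and TLS topics above revolve around `/etc/docker/daemon.json`. A minimal sketch of such a file, enabling a TLS-protected control socket (certificate paths and the listen address are placeholders; the file is written locally here for illustration):

```shell
# Illustrative daemon.json enabling TLS on the Docker control socket.
# On a real host this content belongs in /etc/docker/daemon.json.
cat > daemon.json <<'EOF'
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem"
}
EOF
# After editing the real file: systemctl restart docker
```

With `tlsverify` set, clients must present a certificate signed by the same CA, so the TCP socket is safe to expose beyond localhost.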
- Managing Containers
- Creating a New Container
- Listing Containers
- Managing Container Resources
- Running Commands in an Existing Container
- Interacting with a Running Container
- Stopping, Starting, and Removing Containers
- Copying files in/out of Containers
- Inspecting and Updating Containers
- Docker Output Filtering & Formatting
Lab Tasks
- Managing Containers
- Configuring a Docker Container to Start at Boot
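The container-lifecycle topics in this chapter map to a handful of CLI commands. A sketch of the typical flow, against a running Docker daemon (image and container names are illustrative):

```shell
# Lifecycle of a single container (nginx and "web" are placeholder choices)
docker run -d --name web -p 8080:80 nginx   # create and start, publishing port 80 as 8080
docker ps --filter name=web                 # list the running container
docker exec -it web sh                      # run a shell inside the existing container
docker cp web:/etc/nginx/nginx.conf .       # copy a file out of the container
docker update --restart unless-stopped web  # restart policy, so it comes back after a reboot
docker stop web && docker rm web            # stop, then remove
```

The `--restart unless-stopped` policy (together with an enabled Docker daemon service) is one common way to satisfy the start-at-boot lab; a systemd unit wrapping `docker start` is another.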
- Managing Images
- Docker Images
- Listing and Removing Images
- Searching for Images
- Downloading Images
- Uploading Images
- Export/Import Images
- Save/Load Images
- Committing Changes
Lab Tasks
- Docker Images
- Docker Platform Images
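The image-management operations listed above correspond directly to `docker` subcommands. A sketch (the private registry hostname is a placeholder, and pushing assumes a prior `docker login`):

```shell
# Finding, moving, and archiving images
docker search alpine                                   # search Docker Hub
docker pull alpine:3.19                                # download an image
docker tag alpine:3.19 registry.example.com/base/alpine:3.19
docker push registry.example.com/base/alpine:3.19      # upload to a registry
docker save -o alpine-img.tar alpine:3.19              # save: image with layers, tags, metadata
docker export -o rootfs.tar somecontainer              # export: flattened container filesystem
docker image ls                                        # list local images
docker image rm alpine:3.19                            # remove a local image
```

Note the save/export distinction the outline draws: `save`/`load` round-trip an image (layer history intact), while `export`/`import` flatten a container's filesystem into a single layer.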
- Creating Images with Dockerfile
- Dockerfile
- Caching
- docker image build
- Dockerfile Instructions
- ENV and WORKDIR
- Running Commands
- Getting Files into the Image
- Defining Container Executable
- HEALTHCHECK
- Best Practices
- Multi-Stage builds with Dockerfile
Lab Tasks
- Dockerfile Fundamentals
- Optimizing Image Build Size
- Image Builds and Caching
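A multi-stage Dockerfile of the kind this chapter builds up to might look like the following sketch (the Go toolchain, base images, and app layout are placeholder assumptions; the file is written locally for illustration):

```shell
# Illustrative multi-stage Dockerfile: heavy build stage, minimal runtime stage
cat > Dockerfile <<'EOF'
# --- build stage: full compiler toolchain ---
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# --- runtime stage: only the compiled binary ships ---
FROM alpine:3.19
COPY --from=build /app /usr/local/bin/app
HEALTHCHECK --interval=30s CMD ["/usr/local/bin/app", "--health"]
ENTRYPOINT ["/usr/local/bin/app"]
EOF
# docker image build -t myapp:latest .
```

Only the final stage ends up in the image, which is the usual answer to the "Optimizing Image Build Size" lab; ordering rarely-changing instructions first keeps the build cache effective.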
- Docker Volumes
- Volume Concepts
- The docker volume Command
- Creating and Using Internal Volumes
- Internal Volume Drivers
- Removing Volumes
- Creating and Using External Volumes
- SELinux Considerations
- Mapping Devices
Lab Tasks
- Docker Internal Volumes
- Docker External Volumes
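The internal/external volume distinction above comes down to whether Docker manages the storage. A command sketch (image names and host paths are illustrative):

```shell
# Internal (named) volume: Docker manages the backing storage
docker volume create appdata
docker run -d --name db -v appdata:/var/lib/mysql mysql:8

# External volume (bind mount): a host directory you manage yourself;
# the :Z suffix relabels the path for SELinux-enforcing hosts
docker run -d -v /srv/web:/usr/share/nginx/html:ro,Z nginx

docker volume ls            # list volumes
docker volume rm appdata    # remove (only once no container references it)
```

Mapping a host device into a container uses `--device /dev/sdX:/dev/sdX` rather than a volume flag.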
- Docker Compose/Swarm
- Writing YAML Files
- Concepts
- Compose CLI
- Defining a Service Set
- Legacy Compose Versions
- Docker Engine Swarm Mode
- Docker Swarm Terms
- Docker Swarm Command Overview
- Creating a Swarm
- Creating Services
- Creating Secrets
- Stack Files
- Stack Command
- Swarm Placements
- Swarm Resource Limits & Reservations
- Swarm Networking
- Swarm Networking Troubleshooting
Lab Tasks
- Docker Compose
- Docker Engine Swarm Mode
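A service set of the kind Compose and Swarm stacks both consume is defined in YAML. A minimal sketch (service names, images, and ports are placeholders; the file is written locally for illustration):

```shell
# Illustrative Compose service set: two services on a shared default network
cat > compose.yaml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - app
  app:
    image: registry.example.com/myapp:latest
    environment:
      - APP_ENV=production
EOF
# docker compose up -d                           # single host
# docker stack deploy -c compose.yaml mystack    # against an initialized swarm
```

The same file format serves both workflows, which is why the chapter treats Compose and Swarm mode together; `docker swarm init` is what turns a host into a swarm manager first.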
- Docker Networking
- Overview
- Data-Link Layer Details
- Network Layer Details
- Hostnames and DNS
- Service Reachability
- Container to Container Communication
- Container to Container: Links (deprecated)
- Container to Container: Private Network
- Managing Private Networks
- Remote Host to Container
Lab Tasks
- Docker Networking
- Exposing Ports
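The container-to-container and remote-host topics above can be sketched with a user-defined bridge network (image names are placeholders):

```shell
# Private network: containers on it reach each other by name via built-in DNS
docker network create --driver bridge backend
docker run -d --name api   --network backend myapi       # resolvable as "api"
docker run -d --name cache --network backend redis       # resolvable as "cache"

# Remote host to container: publish a port on the host's interfaces
docker run -d --name front --network backend -p 443:8443 myfrontend

docker network inspect backend    # see attached containers and addressing
```

User-defined networks are the supported replacement for the deprecated `--link` mechanism the chapter mentions: same name-based reachability, without the one-directional environment-variable plumbing.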
- Docker Logging
- Docker Logging
- Docker Logging with json-file and journald
- Docker Logging with syslog
- Docker Logging with Graylog or Logstash
- Docker Logging with Fluentd
- Docker Logging with Amazon or Google
- Docker Logging with Splunk
Lab Tasks
- Logging to syslog
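Switching a container between the logging drivers listed above is a per-container flag. A sketch for the syslog lab (the log host address and tag are placeholders):

```shell
# Route a container's stdout/stderr to a remote syslog daemon
docker run -d --name web \
  --log-driver syslog \
  --log-opt syslog-address=udp://loghost.example.com:514 \
  --log-opt tag=web \
  nginx

# With the default json-file driver, logs live under
# /var/lib/docker/containers/<id>/ and are readable with:
docker logs web   # works with the json-file and journald drivers only
```

A driver can also be set daemon-wide via `log-driver` in `daemon.json`, with per-container flags overriding it.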
- Docker Registry
Lab Tasks
- Docker Registry
- Docker Registry (secured)
- Docker Content Trust
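Running a private registry, as in the labs above, can be sketched with the official `registry` image (the port and image tags are illustrative):

```shell
# Run a local registry, then tag and push an image into it
docker run -d -p 5000:5000 --name registry registry:2
docker tag alpine:3.19 localhost:5000/alpine:3.19
docker push localhost:5000/alpine:3.19

# Content trust: sign images on push and verify signatures on pull
export DOCKER_CONTENT_TRUST=1
```

The unsecured form above is only usable from `localhost`; the secured lab adds TLS certificates (and optionally authentication) so remote Docker hosts will talk to it.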
Audience
- System Administrators: Those responsible for the day-to-day management of server infrastructure, looking to enhance their skills in container creation, deployment, and management with Docker.
- DevOps Engineers: Professionals focused on automating application deployment and infrastructure management who want to deepen their expertise in Docker to streamline their CI/CD pipelines.
- Cloud Engineers: Individuals working with cloud platforms who seek to leverage containers for deploying scalable and resilient applications across public, private, or hybrid cloud environments.
- Site Reliability Engineers (SREs): Those who ensure high availability and performance of applications and services, interested in Docker restart policies and Swarm service management for keeping workloads running.
- Developers: Software developers interested in understanding how their applications are built, packaged, and run as containers, to better align their development practices with operational requirements.
- Technical Leads and Architects: Decision-makers looking to design and implement cloud-native applications with containers at the core of their architecture.
Prerequisites
- Proficiency with the Linux CLI (GL120 "Linux Fundamentals")
- A broad understanding of Linux system administration (GL250 "Enterprise Linux Systems Administration")