Docker Engine exposes a REST API which you can use to control your containers without the docker CLI. The API exposes equivalent functionality via HTTP network calls, so you can script common Docker operations in your favorite programming language or remotely administer one of your hosts. The CLI internally relies on the same API to provide its built-in commands.
In this article we'll look at the basics of enabling and using the API. If you've got a specific use case in mind, refer to the API reference documentation to identify the endpoints you need.
Enabling Docker Engine’s API
The API is integrated with Docker Engine. Any functioning Docker host is already primed to serve API interactions. To use the API over the network, you need to bind the Docker daemon to a TCP socket in addition to, or instead of, the default Unix socket. Start dockerd with the -H flag to specify the sockets to listen on:
sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
This command exposes the API on port 2375. The default Unix socket is retained so the Docker CLI will continue functioning without any reconfiguration. Port 2375 is used for Docker by convention; feel free to change it to suit your environment.
You can make Docker always start with these settings by editing its systemd service configuration. Edit or create /etc/systemd/system/docker.service.d/options.conf, find the ExecStart line, and modify it to include your additional flags:
[Service]
# An empty ExecStart clears the existing value so the drop-in can override it
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
Reload your systemd configuration to apply the change:
sudo systemctl daemon-reload
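Reloading the configuration alone doesn't apply the new flags to the running daemon, so restart the Docker service afterwards:
sudo systemctl restart docker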
Now you're ready to interact with the Docker Engine API on 127.0.0.1:2375.
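A quick way to confirm the daemon is listening on the TCP socket is to request the Engine version over it (this assumes the default port used above):
curl http://127.0.0.1:2375/version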
A Note On Security
The steps above expose the API with no protection whatsoever. Anyone with access to port 2375 on your host can send arbitrary commands to the Docker daemon, starting new containers, filling your disk with images, or destroying existing workloads.
You should set up your firewall to block connections to the port unless you're intentionally exposing Docker on your network. If you do need to enable remote access, you should configure TLS for the daemon socket to restrict access to authenticated clients.
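As an illustration, on a host running UFW you could block outside access to the API port with a rule like this (assuming you kept the default port 2375):
sudo ufw deny 2375/tcp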
When TLS is enabled you'll need to install your daemon's certificate authority on each of your clients. You'll have to supply the client certificate and client key with each request you make. The steps to take will depend on the tool you're using. Here's an example for curl:
curl https://127.0.0.1:2375/v1.41/containers/json --cert client-cert.pem --key client-key.pem
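For client certificates to be enforced, the daemon itself must be started with TLS verification enabled. A sketch of the corresponding dockerd flags, assuming you've already generated ca.pem, server-cert.pem, and server-key.pem, looks like this:
sudo dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H tcp://0.0.0.0:2375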
You may not need to bind a TCP socket at all, depending on your use case. If your client supports Unix sockets, you can use Docker's existing one to make your connections:
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json
This is usually a safer alternative than risking accidental exposure of a TCP socket.
Using the API
The API uses versioned endpoints so you can pin to specific versions of request and response formats. You must specify the version you're using at the start of each endpoint URL. v1.41 is the latest release available in production Docker builds at the time of writing.
http://127.0.0.1:2375/v1.41
API endpoints are grouped into REST functional units. These map to Docker's principal object types such as containers, images, and volumes. You can usually find the APIs for a particular object type within the route URL that starts with its name:
# APIs related to container operations
http://127.0.0.1:2375/v1.41/containers
The API uses JSON for all data exchanges with your HTTP client. Endpoints accept JSON request bodies and issue JSON responses in return. This should simplify data handling within your applications, as you can use the JSON library included with your programming environment. Tools like jq can be used to condense, filter, and sort responses when you're experimenting in your terminal.
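For example, once you're listing containers (covered below), you could pipe the response through jq to pull out just each container's names, assuming jq is installed:
curl -s http://127.0.0.1:2375/v1.41/containers/json | jq '.[].Names'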
Commonly Used Endpoints
As the API replicates the functionality of Docker's CLI, there are too many possible endpoints to cover them all here. Instead we'll show some of the most commonly used options relating to Docker's core functionality.
Listing Containers
The complete list of containers on the host can be obtained from the /containers/json endpoint:
curl http://127.0.0.1:2375/v1.41/containers/json
It defaults to surfacing only running containers. Add all=true to the query string to include stopped containers too. Limit the total number of containers returned with the limit parameter, such as limit=10. Only the 10 most recently created containers will be included.
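Parameters can be combined in the usual query string way. This request would return up to 10 containers, including stopped ones (the URL is quoted so the shell doesn't interpret the ampersand):
curl "http://127.0.0.1:2375/v1.41/containers/json?all=true&limit=10"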
Several other filters are available to prune the list to containers with specific attributes. These are set with the following syntax:
curl http://127.0.0.1:2375/v1.41/containers/json?filters={"name": ["demo"]}
This URL returns the details of the demo container. Other filters can be expressed in a similar way, such as {"status": ["running","paused"]} to get running and paused containers, or {"health": ["healthy"]} for only healthy containers.
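Shells tend to mangle the braces and quotes in raw filter JSON, so when experimenting with curl it can be easier to let curl build the query string for you. Here's a sketch using curl's -G and --data-urlencode options:
curl -G --data-urlencode 'filters={"status":["running","paused"]}' http://127.0.0.1:2375/v1.41/containers/json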
Starting a New Container
Starting a container is a two-stage process when using the API. First you need to create the container, then start it in a separate API call.
Create your container by making a POST request to the /containers/create endpoint. This needs a JSON body with fields that correspond to the flags accepted by the docker run CLI command.
Here's a minimal example of creating a container:
curl http://127.0.0.1:2375/v1.41/containers/create -X POST -H "Content-Type: application/json" -d '{"Image": "example-image:latest"}'
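The create payload can carry most of the options you'd normally pass to docker run. As a rough sketch (the image name, environment variable, and ports here are only examples), you could name the container, set an environment variable, and publish a port like this:
curl "http://127.0.0.1:2375/v1.41/containers/create?name=demo" -X POST -H "Content-Type: application/json" -d '{"Image": "example-image:latest", "Env": ["APP_MODE=production"], "ExposedPorts": {"80/tcp": {}}, "HostConfig": {"PortBindings": {"80/tcp": [{"HostPort": "8080"}]}}}'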
The JSON response will include the new container's ID and any warnings arising from its creation. Send the ID in a call to the /containers/:id/start endpoint:
curl http://127.0.0.1:2375/v1.41/containers/abc123/start -X POST
The container will now be running on the Docker host.
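You can check this by inspecting the container through the /containers/:id/json endpoint, which returns its full configuration and current state:
curl http://127.0.0.1:2375/v1.41/containers/abc123/json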
Cleaning Up Containers
Remove stopped containers by issuing a POST request to the /containers/prune endpoint:
curl http://127.0.0.1:2375/v1.41/containers/prune -X POST
You'll receive a JSON response with ContainersDeleted and SpaceReclaimed fields. ContainersDeleted is an array of the container IDs that were successfully removed. SpaceReclaimed tells you the total storage space freed, in bytes.
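If all you want is the amount of space recovered, a quick jq filter over the prune response will surface it (assuming jq is available):
curl -s -X POST http://127.0.0.1:2375/v1.41/containers/prune | jq '.SpaceReclaimed'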
This endpoint accepts two query string parameters to constrain the operation. The until parameter limits the deletion to containers created before a given time, such as until