Following the documentation on using Docker containers for job deployments, I had to run ./bin/dssadmin build-code-env-images --all
However, this command isn't recognised by dssadmin itself, although it is listed as a command in the CLI help.
[dataiku@dataiku-production dataiku-data]$ ./bin/dssadmin
Usage: dssadmin ACTION [ARGUMENT ...]
install-hadoop-integration [-keytab KEYTAB_FILE_LOCATION -principal KERBEROS_PRINCIPAL]
install-spark-integration [-sparkHome SPARK_HOME] [-pysparkPython PYSPARK_PYTHON]
install-h2o-integration [-sparklingWaterDir SPARKLING_WATER_DIR]
install-R-integration [-noDeps] [-repo REPO_URL | -pkgDir DIR]
build-container-exec-base-image [-tag TAG]
build-container-exec-code-env-images [--all] [--dry-run]
build-container-exec-base-image [-t TAG]
build-code-env-images [--all] [--dry-run]
build-mad-kubernetes-base-image [-tag TAG]
install-monitoring-integration -graphiteServer HOSTNAME:PORT [-prefix PREFIX] [-pkg LOCAL_COLLECTD_PACKAGE]
install-impersonation [-pythonBin PYTHONBIN] [-noInstallSudoSnippet] DSS_USER
run-diagnosis [OPTIONS] OUTPUT_FILE
[dataiku@dataiku-production dataiku-data]$ ./bin/dssadmin build-code-env-images
[-] Unsupported action build-code-env-images
Afterwards, I ran build-container-exec-code-env-images, but this didn't build any usable image in the container exec settings.
Am I overlooking something?
There is some duplication in this help message, which will be amended in the next DSS release.
If you are trying to build container execution images for code envs, remember to first select, in each code env's settings, the container execution configuration(s) for which you want that code env available. Then updating the code env should build the corresponding Docker image(s) (provided you already have a base image).
The build-container-exec-code-env-images command is rather for when you want to rebuild those images for all code envs at once, e.g. after rebuilding the base image (which is required when you upgrade DSS to a newer version).
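For reference, the rebuild sequence after a DSS upgrade would look roughly like the sketch below, using only the actions shown in the dssadmin help output above. This assumes you run it from the data directory and that a base image is needed before the per-code-env images; it cannot be verified outside a DSS install, so treat it as a sketch rather than an exact recipe:

```shell
# Run from the DSS data directory (the one containing ./bin/dssadmin).

# 1. Rebuild the container exec base image so it matches the new DSS version.
./bin/dssadmin build-container-exec-base-image

# 2. Preview which code env images would be rebuilt on top of it.
./bin/dssadmin build-container-exec-code-env-images --dry-run

# 3. Rebuild the container exec images for all code envs.
./bin/dssadmin build-container-exec-code-env-images --all
```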