
Visual Time Series exit code 132

jgrout
Level 1

Hi, whenever I try to execute a Time Series training, I get:

 

Training failed

Read the logs
Process died (exit code: 132)
 
Here are the logs (long):
[2022/07/17-10:25:05.693] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.prediction]  - ******************************************
[2022/07/17-10:25:05.694] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.prediction]  - ** Start train session s1
[2022/07/17-10:25:05.695] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.prediction]  - ******************************************
[2022/07/17-10:25:05.702] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.shaker.data] T-kZz4APwd - [ct: 9] Need to compute sampleId before checking memory cache
[2022/07/17-10:25:05.703] [FT-TrainWorkThread-xNsxm5L8-298] [DEBUG] [dip.shaker.runner] T-kZz4APwd - [ct: 10] Script settings sampleMax=104857600 processedMax=-1
[2022/07/17-10:25:05.704] [FT-TrainWorkThread-xNsxm5L8-298] [DEBUG] [dip.shaker.runner] T-kZz4APwd - [ct: 11] Processing with sampleMax=104857600 processedMax=524288000
[2022/07/17-10:25:05.706] [FT-TrainWorkThread-xNsxm5L8-298] [DEBUG] [dip.shaker.runner] T-kZz4APwd - [ct: 13] Computed required sample id : 501a40d37c4ac05d834bd60e90ff3e78-NA-ac2e0aa81c1215a34ba9f85052ba5ff70--d751713988987e9331980363e24189ce
[2022/07/17-10:25:05.710] [FT-TrainWorkThread-xNsxm5L8-298] [DEBUG] [dku.shaker.cache] T-kZz4APwd - Shaker MemoryCache get on dataset DKU_TUT_TS_FORECAST.train key=ds=96e911ab462f54b931c419309458773a--scr=5cc56226f1d78a396fff9635599898eb--samp=501a40d37c4ac05d834bd60e90ff3e78-NA-ac2e0aa81c1215a34ba9f85052ba5ff70--d751713988987e9331980363e24189ce: hit
[2022/07/17-10:25:05.712] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.shaker.schema] T-kZz4APwd - [ct: 19] Column Ticker meaning=Text fail=0
[2022/07/17-10:25:05.712] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.shaker.schema] T-kZz4APwd - [ct: 19] Column Date meaning=Date fail=0
[2022/07/17-10:25:05.713] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.shaker.schema] T-kZz4APwd - [ct: 20] Column Open meaning=DoubleMeaning fail=0
[2022/07/17-10:25:05.714] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.shaker.schema] T-kZz4APwd - [ct: 21] Column High meaning=DoubleMeaning fail=0
[2022/07/17-10:25:05.715] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.shaker.schema] T-kZz4APwd - [ct: 22] Column Low meaning=DoubleMeaning fail=0
[2022/07/17-10:25:05.717] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.shaker.schema] T-kZz4APwd - [ct: 24] Column Close meaning=DoubleMeaning fail=0
[2022/07/17-10:25:05.717] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.shaker.schema] T-kZz4APwd - [ct: 24] Column Adj_close meaning=DoubleMeaning fail=0
[2022/07/17-10:25:05.719] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.shaker.schema] T-kZz4APwd - [ct: 26] Column Volume meaning=LongMeaning fail=0
[2022/07/17-10:25:05.725] [FT-TrainWorkThread-xNsxm5L8-298] [WARN] [dku.ml.prediction.split] T-kZz4APwd - Sorted train/test split: ordering column "Date" is not numeric
[2022/07/17-10:25:05.727] [Thread-145] [INFO] [dku.datasets.pull]  - pull background thread starting for train
[2022/07/17-10:25:05.731] [Thread-145] [INFO] [dku.datasets.file]  - Building Filesystem handler config: {"connection":"filesystem_managed","path":"/DKU_TUT_TS_FORECAST.train","notReadyIfEmpty":false,"filesSelectionRules":{"mode":"ALL","excludeRules":[],"includeRules":[],"explicitFiles":[]}}
[2022/07/17-10:25:05.731] [Thread-145] [INFO] [dku.datasets.ftplike]  - Enumerating Filesystem dataset prefix=
[2022/07/17-10:25:05.733] [Thread-145] [DEBUG] [dku.datasets.fsbased]  - Building FS provider for dataset handler: DKU_TUT_TS_FORECAST.train
[2022/07/17-10:25:05.736] [Thread-145] [DEBUG] [dku.datasets.fsbased]  - FS Provider built
[2022/07/17-10:25:05.737] [Thread-145] [DEBUG] [dku.fs.local]  - Enumerating local filesystem prefix=/
[2022/07/17-10:25:05.737] [Thread-145] [DEBUG] [dku.fs.local]  - Enumeration done nb_paths=1 size=51053
[2022/07/17-10:25:05.738] [Thread-145] [INFO] [dku.input.push]  - USTP: push selection.method=HEAD_SEQUENTIAL records=100000 ratio=0.02 col=null
[2022/07/17-10:25:05.740] [Thread-145] [INFO] [dku.format]  - Extractor run: limit={"maxBytes":-1,"maxRecords":100000,"ordering":{"enabled":false,"rules":[]}} totalRecords=0
[2022/07/17-10:25:05.751] [Thread-145] [INFO] [dku]  - getCompression filename=out-s0.csv.gz
[2022/07/17-10:25:05.753] [Thread-145] [INFO] [dku]  - getCompression filename=out-s0.csv.gz
[2022/07/17-10:25:05.756] [Thread-145] [INFO] [dku.format]  - Start compressed [GZIP] stream: /home/dataiku/dss/managed_datasets/DKU_TUT_TS_FORECAST.train/out-s0.csv.gz / totalRecsBefore=0
[2022/07/17-10:25:05.757] [Thread-145] [INFO] [dku]  - getCompression filename=out-s0.csv.gz
[2022/07/17-10:25:05.757] [Thread-145] [INFO] [dku]  - getCompression filename=out-s0.csv.gz
[2022/07/17-10:25:05.936] [Thread-145] [INFO] [dku.format]  - after stream totalComp=51053 totalUncomp=177966 totalRec=2298
[2022/07/17-10:25:05.937] [Thread-145] [INFO] [dku.format]  - Extractor run done, totalCompressed=51053 totalRecords=2298
[2022/07/17-10:25:05.937] [Thread-145] [DEBUG] [dku.datasets.pull]  - pull background thread: ending queue,  cursize=1131
[2022/07/17-10:25:05.937] [Thread-145] [INFO] [dku.datasets.pull]  - pull background thread finished for train
[2022/07/17-10:25:05.951] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.datasets.pull] T-kZz4APwd - End of stream reached
[2022/07/17-10:25:05.951] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dip.sorter.chunk] T-kZz4APwd - Spilling chunk. used=1275132
[2022/07/17-10:25:06.288] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.ml.prediction.split] T-kZz4APwd - [ct: 595] Sorted train/test split: threshold = 2021-12-27T00:00:00.000Z
[2022/07/17-10:25:06.301] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.splits] T-kZz4APwd - [ct: 608] Checking if splits are up to date. Policy: type=SPLIT_SINGLE_DATASET,split=SORTED,splitBeforePrepare=true,ds=train,sel=(method=head-s,records=100000),streamAll=true,c=Date,ascending=true, instance id: fcd3c9cd8c3629ecf10bef89907ece0a-0
[2022/07/17-10:25:06.303] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.splits] T-kZz4APwd - [ct: 610] Search for split: p=type=SPLIT_SINGLE_DATASET,split=SORTED,splitBeforePrepare=true,ds=train,sel=(method=head-s,records=100000),streamAll=true,c=Date,ascending=true i=fcd3c9cd8c3629ecf10bef89907ece0a-0
[2022/07/17-10:25:06.307] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.splits] T-kZz4APwd - [ct: 614] Search for split: p=type=SPLIT_SINGLE_DATASET,split=SORTED,splitBeforePrepare=true,ds=train,sel=(method=head-s,records=100000),streamAll=true,c=Date,ascending=true i=fcd3c9cd8c3629ecf10bef89907ece0a-0
[2022/07/17-10:25:06.310] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.splits] T-kZz4APwd - [ct: 617] Checking if splits are up to date. Policy: type=SPLIT_SINGLE_DATASET,split=SORTED,splitBeforePrepare=true,ds=train,sel=(method=head-s,records=100000),streamAll=true,c=Date,ascending=true, instance id: fcd3c9cd8c3629ecf10bef89907ece0a-0
[2022/07/17-10:25:06.311] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.splits] T-kZz4APwd - [ct: 618] Search for split: p=type=SPLIT_SINGLE_DATASET,split=SORTED,splitBeforePrepare=true,ds=train,sel=(method=head-s,records=100000),streamAll=true,c=Date,ascending=true i=fcd3c9cd8c3629ecf10bef89907ece0a-0
[2022/07/17-10:25:06.316] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.splits] T-kZz4APwd - [ct: 623] Search for split: p=type=SPLIT_SINGLE_DATASET,split=SORTED,splitBeforePrepare=true,ds=train,sel=(method=head-s,records=100000),streamAll=true,c=Date,ascending=true i=fcd3c9cd8c3629ecf10bef89907ece0a-0
[2022/07/17-10:25:06.323] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.ml.python] T-kZz4APwd - [ct: 630] Joining processing thread ...
[2022/07/17-10:25:06.326] [MRT-300] [INFO] [dku.analysis.ml.python]  - Running a preprocessing set: pp1 in /home/dataiku/dss/analysis-data/DKU_TUT_TS_FORECAST/5xVsrsKi/kZz4APwd/sessions/s1/pp1
[2022/07/17-10:25:06.331] [MRT-300] [INFO] [dku.block.link]  - Started a socket on port 45393
[2022/07/17-10:25:06.338] [MRT-300] [INFO] [dku.ml.kernel]  - Writing output of python-single-command-kernel to /home/dataiku/dss/analysis-data/DKU_TUT_TS_FORECAST/5xVsrsKi/kZz4APwd/sessions/s1/pp1/train.log
[2022/07/17-10:25:06.339] [MRT-300] [INFO] [dku.code.envs.resolution]  - Executing Python activity in env: VTS
[2022/07/17-10:25:06.340] [MRT-300] [WARN] [dku.code.projectLibs]  - External libraries file not found: /home/dataiku/dss/config/projects/DKU_TUT_TS_FORECAST/lib/external-libraries.json
[2022/07/17-10:25:06.340] [MRT-300] [INFO] [dku.code.projectLibs]  - EXTERNAL LIBS FROM DKU_TUT_TS_FORECAST is {"gitReferences":{},"pythonPath":["python"],"rsrcPath":["R"],"importLibrariesFromProjects":[]}
[2022/07/17-10:25:06.342] [MRT-300] [INFO] [dku.code.projectLibs]  - chunkFolder is /home/dataiku/dss/config/projects/DKU_TUT_TS_FORECAST/lib/R
[2022/07/17-10:25:06.346] [MRT-300] [INFO] [dku.python.single_command.kernel]  - Starting Python process for kernel  python-single-command-kernel
[2022/07/17-10:25:06.346] [MRT-300] [INFO] [dip.tickets]  - Creating API ticket for analysis-ml-DKU_TUT_TS_FORECAST-d1kDRHF on behalf of admin id=analysis-ml-DKU_TUT_TS_FORECAST-d1kDRHF_AkxZGFeovyIz
[2022/07/17-10:25:06.347] [MRT-300] [INFO] [dku.security.process]  - Starting process (regular)
[2022/07/17-10:25:06.445] [MRT-300] [INFO] [dku.security.process]  - Process started with pid=2520
[2022/07/17-10:25:06.446] [MRT-300] [INFO] [dku.processes.cgroups]  - Will use cgroups []
[2022/07/17-10:25:06.446] [MRT-300] [INFO] [dku.processes.cgroups]  - Applying rules to used cgroups: []
[2022/07/17-10:25:06.480] [KNL-python-single-command-kernel-monitor-309] [DEBUG] [dku.resourceusage]  - Reporting start of CRU:{"context":{"type":"ANALYSIS_ML_TRAIN","authIdentifier":"admin","projectKey":"DKU_TUT_TS_FORECAST","analysisId":"5xVsrsKi","mlTaskId":"kZz4APwd","sessionId":"s1"},"type":"LOCAL_PROCESS","id":"TrNWhiBnUNqnLDkm","startTime":1658053506480,"localProcess":{"cpuCurrent":0.0}}
[2022/07/17-10:25:06.524] [process-resource-monitor-2520-313] [DEBUG] [dku.resource]  - Process stats for pid 2520: {"pid":2520,"commandName":"/home/dataiku/dss/code-envs/python/VTS/bin/python","cpuUserTimeMS":130,"cpuSystemTimeMS":20,"cpuChildrenUserTimeMS":0,"cpuChildrenSystemTimeMS":0,"cpuTotalMS":150,"cpuCurrent":0.0,"vmSizeMB":185,"vmRSSMB":11,"vmHWMMB":11,"vmRSSAnonMB":7,"vmDataMB":6,"vmSizePeakMB":185,"vmRSSPeakMB":11,"vmRSSTotalMBS":0,"majorFaults":0,"childrenMajorFaults":0}
Installing debugging signal handler
[2022-07-17 10:25:08,488] [2520/MainThread] [INFO] [dataiku.base.socket_block_link] Connecting to localhost (127.0.0.1) at port 45393
[2022-07-17 10:25:08,488] [2520/MainThread] [INFO] [dataiku.base.socket_block_link] Connected to localhost (127.0.0.1) at port 45393
[2022/07/17-10:25:08.492] [MRT-300] [INFO] [dku.link.secret_protected]  - Connected to kernel
[2022/07/17-10:25:08.503] [MRT-300] [INFO] [dku.block.link.interaction]  - Execute link command respClazz=true respTypeToken=false respIsString=false is=false asyncInputStream=false os=false
[2022-07-17 10:25:08,733] [2520/MainThread] [INFO] [dataiku.doctor.utils.dku_pickle] Setting cloudpickle as the pickling tool
/home/dataiku/dataiku-dss-11.0.0/python/dataiku/doctor/dkuapi.py:16: DeprecationWarning: inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec()
  argspec = inspect.getargspec(api)
[2022-07-17 10:25:08,906] [2520/MainThread] [INFO] [root] Running analysis command: train_prediction_timeseries
/home/dataiku/dss/code-envs/python/VTS/lib/python3.6/site-packages/gluonts/json.py:46: UserWarning: Using `json`-module for json-handling. Consider installing one of `orjson`, `ujson` to speed up serialization and deserialization.
  "Using `json`-module for json-handling. "
[2022/07/17-10:25:09.202] [KNL-python-single-command-kernel-monitor-309] [INFO] [dku.kernels]  - Process done with code 132
[2022/07/17-10:25:09.203] [KNL-python-single-command-kernel-monitor-309] [INFO] [dip.tickets]  - Destroying API ticket for analysis-ml-DKU_TUT_TS_FORECAST-d1kDRHF on behalf of admin
[2022/07/17-10:25:09.210] [KNL-python-single-command-kernel-monitor-309] [WARN] [dku.resource]  - stat file for pid 2520 does not exist. Process died?
[2022/07/17-10:25:09.210] [KNL-python-single-command-kernel-monitor-309] [DEBUG] [dku.resourceusage]  - Reporting completion of CRU:{"context":{"type":"ANALYSIS_ML_TRAIN","authIdentifier":"admin","projectKey":"DKU_TUT_TS_FORECAST","analysisId":"5xVsrsKi","mlTaskId":"kZz4APwd","sessionId":"s1"},"type":"LOCAL_PROCESS","id":"TrNWhiBnUNqnLDkm","startTime":1658053506480,"localProcess":{"pid":2520,"commandName":"/home/dataiku/dss/code-envs/python/VTS/bin/python","cpuUserTimeMS":130,"cpuSystemTimeMS":20,"cpuChildrenUserTimeMS":0,"cpuChildrenSystemTimeMS":0,"cpuTotalMS":150,"cpuCurrent":0.0,"vmSizeMB":185,"vmRSSMB":11,"vmHWMMB":11,"vmRSSAnonMB":7,"vmDataMB":6,"vmSizePeakMB":185,"vmRSSPeakMB":11,"vmRSSTotalMBS":0,"majorFaults":0,"childrenMajorFaults":0}}
[2022/07/17-10:25:09.212] [MRT-300] [INFO] [dku.kernels]  - Getting kernel tail
[2022/07/17-10:25:09.213] [MRT-300] [INFO] [dku.kernels]  - Trying to enrich exception: com.dataiku.dip.io.SocketBlockLinkIOException: Failed to get result from kernel from kernel com.dataiku.dip.analysis.coreservices.AnalysisMLKernel@bd607e2 process=null pid=?? retcode=132
[2022/07/17-10:25:09.313] [MRT-300] [INFO] [dku.kernels]  - Getting kernel tail
[2022/07/17-10:25:09.315] [MRT-300] [WARN] [dku.analysis.ml.python]  - Training failed
com.dataiku.dip.exceptions.ProcessDiedException: Process died (exit code: 132)
	at com.dataiku.dip.kernels.DSSKernelBase.maybeRethrowAsProcessDied(DSSKernelBase.java:284)
	at com.dataiku.dip.analysis.ml.prediction.PredictionTrainAdditionalThread.process(PredictionTrainAdditionalThread.java:78)
	at com.dataiku.dip.analysis.ml.shared.PRNSTrainThread.run(PRNSTrainThread.java:173)
[2022/07/17-10:25:09.316] [MRT-300] [INFO] [dku.block.link]  - Closed socket
[2022/07/17-10:25:09.316] [MRT-300] [INFO] [dku.block.link]  - Closed socket
[2022/07/17-10:25:09.317] [MRT-300] [INFO] [dku.block.link]  - Closed serverSocket
[2022/07/17-10:25:09.317] [MRT-300] [ERROR] [dku.analysis.ml.python]  - Processing failed
com.dataiku.dip.exceptions.ProcessDiedException: Process died (exit code: 132)
	at com.dataiku.dip.kernels.DSSKernelBase.maybeRethrowAsProcessDied(DSSKernelBase.java:284)
	at com.dataiku.dip.analysis.ml.prediction.PredictionTrainAdditionalThread.process(PredictionTrainAdditionalThread.java:78)
	at com.dataiku.dip.analysis.ml.shared.PRNSTrainThread.run(PRNSTrainThread.java:173)
[2022/07/17-10:25:09.317] [MRT-300] [INFO] [dku.analysis.ml]  - Locking model train info file /home/dataiku/dss/analysis-data/DKU_TUT_TS_FORECAST/5xVsrsKi/kZz4APwd/sessions/s1/pp1/m1/train_info.json
[2022/07/17-10:25:09.318] [MRT-300] [INFO] [dku.analysis.ml]  - Unlocking model train info file /home/dataiku/dss/analysis-data/DKU_TUT_TS_FORECAST/5xVsrsKi/kZz4APwd/sessions/s1/pp1/m1/train_info.json
[2022/07/17-10:25:09.319] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.ml.python] T-kZz4APwd - [ct: 3626] Processing thread joined ...
[2022/07/17-10:25:09.325] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.ml.python] T-kZz4APwd - [ct: 3632] Joining processing thread ...
[2022/07/17-10:25:09.517] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.ml.python] T-kZz4APwd - [ct: 3824] Processing thread joined ...
[2022/07/17-10:25:09.553] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis] T-kZz4APwd - [ct: 3860] Train done
[2022/07/17-10:25:09.568] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.prediction] T-kZz4APwd - Train done
[2022/07/17-10:25:09.596] [FT-TrainWorkThread-xNsxm5L8-298] [INFO] [dku.analysis.trainingdetails] T-kZz4APwd - Publishing mltask-train-done reflected event,
I used the prescribed Visual Time Series code environment.
 
Here is a list of the installed packages:
backcall==0.2.0
certifi==2022.6.15
charset-normalizer==2.0.12
click==8.0.4
cloudpickle==1.5.0
convertdate==2.3.2
cycler==0.11.0
Cython==0.29.30
dataclasses==0.8
decorator==4.4.2
dill==0.3.3
Flask==1.0.4
gluonts==0.8.1
graphviz==0.8.4
hijri-converter==2.2.4
holidays==0.13
idna==3.3
importlib-metadata==4.8.3
importlib-resources==5.4.0
ipykernel==4.8.2
ipython==7.16.3
ipython-genutils==0.2.0
itsdangerous==2.0.1
jedi==0.17.2
Jinja2==2.10.3
joblib==1.1.0
jupyter-client==5.2.4
jupyter-core==4.4.0
kiwisolver==1.3.1
korean-lunar-calendar==0.2.1
lightgbm==3.2.1
MarkupSafe==2.0.1
matplotlib==3.3.4
mxnet==1.7.0.post1
numpy==1.19.5
pandas==1.0.5
parso==0.7.1
patsy==0.5.2
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.4.0
pmdarima==1.2.1
prompt-toolkit==3.0.30
ptyprocess==0.7.0
pydantic==1.9.1
Pygments==2.12.0
PyMeeus==0.5.11
pyparsing==3.0.9
python-dateutil==2.8.1
pytz==2020.5
pyzmq==18.0.2
requests==2.27.1
scikit-learn==0.20.4
scipy==1.2.3
simplegeneric==0.8.1
six==1.16.0
statsmodels==0.12.2
threadpoolctl==2.1.0
toolz==0.12.0
tornado==5.1.1
tqdm==4.64.0
traitlets==4.3.3
typing-extensions==3.10.0.2
urllib3==1.26.10
wcwidth==0.2.5
Werkzeug==2.0.3
xgboost==0.82
zipp==3.6.0
The computer has an i5-8000 series processor with AVX2 support.
 
 
Would you have any recommendations? This problem has persisted across roughly my last 20 attempts to get this working.
Operating system used: Linux guest in VirtualBox
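For context on the error itself: an exit code above 128 conventionally means the process was killed by signal number (exit code − 128), and 132 − 128 = 4 is SIGILL (illegal instruction) — consistent with a compiled wheel such as mxnet executing CPU instructions the VM does not expose. A quick sketch (not part of the thread) for decoding such exit codes in Python:

```python
import signal

EXIT_CODE = 132  # as reported by DSS: "Process died (exit code: 132)"

# On POSIX systems, exit codes above 128 usually encode death by signal:
# signal number = exit code - 128.
if EXIT_CODE > 128:
    sig = signal.Signals(EXIT_CODE - 128)
    print(f"killed by {sig.name}")  # -> killed by SIGILL
```

SIGILL here is the CPU refusing an instruction the binary assumed was available (typically AVX/AVX2), which is why the discussion below turns to VirtualBox's CPU feature passthrough.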
3 Replies
sergeyd
Dataiker

Hi @jgrout 

While the underlying CPU has AVX support, the way VirtualBox virtualizes it may leave this instruction set unavailable to the guest VM. Please check the VM startup logs in VirtualBox to see whether AVX is actually passed through to the guest.

jgrout
Level 1
Author

Thank you for the feedback. I have looked at the VM logs, and AVX is mentioned multiple times:

 

00:00:05.104857 AVX - AVX support = 0 (1)
00:00:05.104866 AVX2 - Advanced Vector Extensions 2 = 0 (1)
00:00:05.104875 AVX512F - AVX512 Foundation instructions = 0 (0)
00:00:05.104881 AVX512PF - AVX512 Prefetch instructions = 0 (0)
00:00:05.104881 AVX512ER - AVX512 Exponential & Reciprocal instructions = 0 (0)
00:00:05.104882 AVX512CD - AVX512 Conflict Detection instructions = 0 (0)

 

Do you have any ideas on how to get this working?

 

 

sergeyd
Dataiker

Hi, these "0"s mean that the AVX instruction sets have not been passed through to the guest VM.

It should be:

 

00:00:01.963497   AVX - AVX support                      = 1 (1)
00:00:01.963504   AVX2 - Advanced Vector Extensions 2    = 1 (1)

 

So either the VirtualBox version you are running cannot pass them through (try upgrading to the latest release), or VirtualBox doesn't properly virtualize this particular CPU and therefore cannot expose AVX to the guest VM.

You can also check whether the guest machine has inherited AVX with this command:

lscpu | grep avx
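The same check can be scripted from inside the guest by reading /proc/cpuinfo, which lists the CPU feature flags the kernel sees. A small sketch (the cpu_flags helper is illustrative, not part of DSS):

```python
def cpu_flags(cpuinfo_text):
    """Parse the first 'flags' line of /proc/cpuinfo into a set of features."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# Self-contained demo on a sample snippet; on a real guest you would use:
#   flags = cpu_flags(open("/proc/cpuinfo").read())
sample = "processor\t: 0\nflags\t\t: fpu vme sse2 avx avx2\n"
flags = cpu_flags(sample)
print("avx" in flags, "avx2" in flags)  # -> True True
```

If "avx"/"avx2" are absent from the real flags line, any wheel compiled for AVX (mxnet among them) will die with SIGILL exactly as in the training log above.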