Install Llama 2 in a Code Env (via llama.cpp with llama-cpp-python)
Hello Dataikers!
TL;DR
I want to use the model llama-2-7b.ggmlv3.q4_0.bin inside a code env. To do this, I am using the Python bindings llama-cpp-python.
My problem is that I need to compile llama.cpp and install its shared library. How can I do this?
P.S.: This code env will be containerized.
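For context, this is roughly what I want to run once the package is installed (a minimal sketch; the model path and the prompt are just placeholders from my side):

from llama_cpp import Llama

# Load the GGML model; this only works once llama.cpp's shared library
# has been compiled as part of installing llama-cpp-python.
llm = Llama(model_path="/path/to/llama-2-7b.ggmlv3.q4_0.bin", n_ctx=2048)

# Simple completion call to check that everything works.
output = llm("Q: What is Dataiku DSS? A:", max_tokens=64)
print(output["choices"][0]["text"])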
Saul
Answers
-
Miguel Angel (Dataiker)
Hi,
According to the 'llama-cpp-python' documentation:
pip install llama-cpp-python
The above command will attempt to install the package and build llama.cpp from source. This is the recommended installation method as it ensures that llama.cpp is built with the available optimizations for your system.
so you should not need to compile llama.cpp as a separate step.
If you experience any issues installing 'llama-cpp-python' in DSS, we'll need the log from the code env update dialog in order to start our investigation. Please note that this package has no prebuilt wheels, so among other things you'll need certain OS packages installed on your server in order to build from source, for instance CMake.
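As a quick sanity check once the code env has built (a minimal sketch, nothing Dataiku-specific is assumed), a successful import already confirms that the bundled llama.cpp shared library was compiled and can be loaded:

# If this import succeeds, llama.cpp was built and its shared library is found.
import llama_cpp
# Shows where the package (and its compiled library) was installed.
print(llama_cpp.__file__)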
-
Hello Miguel,
Thanks for your response.
Yeah, this is the roadblock: compiling llama.cpp (C/C++) inside containers.
During a call with Dataiku Support, I was informed that:
- For a local code env, my Dataiku administrator needs to compile and install llama.cpp.
- For a containerized code env, there is no way to precompile packages. It could be done by altering the container's manifest, but that is tied to Dataiku's support policy, so we cannot edit manifests.
So it is locked; there is no way to make it work for now. Hopefully in the future the Dataiku team can add a way to precompile GCC libraries before installing Python packages.
- Saul