From: Percy Piper <piper.percy@googlemail.com>
Subject: Re: Update llama.cpp to b5372
To: ports@openbsd.org
Date: Wed, 14 May 2025 20:41:19 +0100

Lovely. Thank you very much.

-P

On 14/05/2025 18:50, Stuart Henderson wrote:
> pretty good - the only things left to do are library bumps and registering
> a dependency on curl. I'll take care of it and commit later this evening.
> 
> 
> On 2025/05/14 17:33, Percy Piper wrote:
>> Hi.
>>
>> Fairly heavy user of this on amd64 -current (Vulkan).
>>
>> Many useful improvements, including support for newer models (Gemma 3,
>> Qwen 3, etc.), and llama-server now exits gracefully on ^C.
>>
>> I don't have much experience with ports, so apologies for any errors. The
>> port has no current maintainer, so I'm sending this directly to ports@ -
>> hope that's okay.
>>
>> (https://github.com/ggml-org/llama.cpp/pull/13541)
>>
>> Percy.
>>
>>
>>
>> Index: Makefile
>> ===================================================================
>> RCS file: /cvs/ports/misc/llama.cpp/Makefile,v
>> diff -u -p -r1.7 Makefile
>> --- Makefile	18 Feb 2025 00:02:17 -0000	1.7
>> +++ Makefile	14 May 2025 16:05:05 -0000
>> @@ -8,7 +8,7 @@ COMMENT =		LLM inference system
>>
>>   GH_ACCOUNT =		ggerganov
>>   GH_PROJECT =		llama.cpp
>> -GH_TAGNAME =		b4706
>> +GH_TAGNAME =		b5372
>>   PKGNAME =		llama-cpp-0.0.${GH_TAGNAME:S/b//}
>>
>>   SHARED_LIBS +=		ggml-base 0.0
>> Index: distinfo
>> ===================================================================
>> RCS file: /cvs/ports/misc/llama.cpp/distinfo,v
>> diff -u -p -r1.2 distinfo
>> --- distinfo	13 Feb 2025 12:21:58 -0000	1.2
>> +++ distinfo	14 May 2025 16:05:05 -0000
>> @@ -1,2 +1,2 @@
>> -SHA256 (llama.cpp-b4706.tar.gz) = jpINppeW9Vu/jeqf9gnJPsZ1Hkpkj6YWOHbJSAcPwxc=
>> -SIZE (llama.cpp-b4706.tar.gz) = 20705861
>> +SHA256 (llama.cpp-b5372.tar.gz) = 28q/8fqCc/rtzo8qbUtQ6njW/Zs3mzaLH0ynCWIkuk8=
>> +SIZE (llama.cpp-b5372.tar.gz) = 21147804
>> Index: patches/patch-common_common_cpp
>> ===================================================================
>> RCS file: patches/patch-common_common_cpp
>> diff -N patches/patch-common_common_cpp
>> --- /dev/null	1 Jan 1970 00:00:00 -0000
>> +++ patches/patch-common_common_cpp	14 May 2025 16:05:05 -0000
>> @@ -0,0 +1,12 @@
>> +Index: common/common.cpp
>> +--- common/common.cpp.orig
>> ++++ common/common.cpp
>> +@@ -830,7 +830,7 @@ std::string fs_get_cache_directory() {
>> +     if (getenv("LLAMA_CACHE")) {
>> +         cache_directory = std::getenv("LLAMA_CACHE");
>> +     } else {
>> +-#if defined(__linux__) || defined(__FreeBSD__) || defined(_AIX)
>> ++#if defined(__linux__) || defined(__FreeBSD__) || defined(_AIX) || defined(__OpenBSD__)
>> +         if (std::getenv("XDG_CACHE_HOME")) {
>> +             cache_directory = std::getenv("XDG_CACHE_HOME");
>> +         } else {
>> Index: patches/patch-tools_rpc_rpc-server_cpp
>> ===================================================================
>> RCS file: patches/patch-tools_rpc_rpc-server_cpp
>> diff -N patches/patch-tools_rpc_rpc-server_cpp
>> --- /dev/null	1 Jan 1970 00:00:00 -0000
>> +++ patches/patch-tools_rpc_rpc-server_cpp	14 May 2025 16:05:05 -0000
>> @@ -0,0 +1,12 @@
>> +Index: tools/rpc/rpc-server.cpp
>> +--- tools/rpc/rpc-server.cpp.orig
>> ++++ tools/rpc/rpc-server.cpp
>> +@@ -111,7 +111,7 @@ static std::string fs_get_cache_directory() {
>> +     if (getenv("LLAMA_CACHE")) {
>> +         cache_directory = std::getenv("LLAMA_CACHE");
>> +     } else {
>> +-#if defined(__linux__) || defined(__FreeBSD__) || defined(_AIX)
>> ++#if defined(__linux__) || defined(__FreeBSD__) || defined(_AIX) || defined(__OpenBSD__)
>> +         if (std::getenv("XDG_CACHE_HOME")) {
>> +             cache_directory = std::getenv("XDG_CACHE_HOME");
>> +         } else {
>> Index: pkg/PLIST
>> ===================================================================
>> RCS file: /cvs/ports/misc/llama.cpp/pkg/PLIST,v
>> diff -u -p -r1.3 PLIST
>> --- pkg/PLIST	13 Feb 2025 12:21:59 -0000	1.3
>> +++ pkg/PLIST	14 May 2025 16:05:05 -0000
>> @@ -8,28 +8,23 @@ bin/convert_hf_to_gguf.py
>>   @bin bin/llama-embedding
>>   @bin bin/llama-eval-callback
>>   @bin bin/llama-export-lora
>> -@bin bin/llama-gbnf-validator
>> +@bin bin/llama-finetune
>>   @bin bin/llama-gen-docs
>>   @bin bin/llama-gguf
>>   @bin bin/llama-gguf-hash
>>   @bin bin/llama-gguf-split
>>   @bin bin/llama-gritlm
>>   @bin bin/llama-imatrix
>> -@bin bin/llama-infill
>> -@bin bin/llama-llava-cli
>> -@bin bin/llama-llava-clip-quantize-cli
>>   @bin bin/llama-lookahead
>>   @bin bin/llama-lookup
>>   @bin bin/llama-lookup-create
>>   @bin bin/llama-lookup-merge
>>   @bin bin/llama-lookup-stats
>> -@bin bin/llama-minicpmv-cli
>> +@bin bin/llama-mtmd-cli
>>   @bin bin/llama-parallel
>>   @bin bin/llama-passkey
>>   @bin bin/llama-perplexity
>>   @bin bin/llama-quantize
>> -@bin bin/llama-quantize-stats
>> -@bin bin/llama-qwen2vl-cli
>>   @bin bin/llama-retrieval
>>   @bin bin/llama-run
>>   @bin bin/llama-save-load-state
>> @@ -45,6 +40,7 @@ include/ggml-alloc.h
>>   include/ggml-backend.h
>>   include/ggml-blas.h
>>   include/ggml-cann.h
>> +include/ggml-cpp.h
>>   include/ggml-cpu.h
>>   include/ggml-cuda.h
>>   include/ggml-kompute.h
>> @@ -68,5 +64,5 @@ lib/cmake/llama/llama-version.cmake
>>   @lib lib/libggml-vulkan.so.${LIBggml-vulkan_VERSION}
>>   @lib lib/libggml.so.${LIBggml_VERSION}
>>   @lib lib/libllama.so.${LIBllama_VERSION}
>> -@lib lib/libllava_shared.so.${LIBllava_shared_VERSION}
>> +@so lib/libmtmd_shared.so
>>   lib/pkgconfig/llama.pc