From: Percy Piper <piper.percy@googlemail.com>
Subject: Update llama.cpp
To: ports@openbsd.org
Date: Thu, 11 Sep 2025 16:52:07 +0100

Hi.

This update brings some nice improvements (e.g. gpt-oss support[1]).

However, I am unable to resolve a blocking issue. During the build,
-I/usr/local/include appears on the compile line before the source
tree's own include directories, so headers from a previously installed
package are found before those in the sources, and the build fails.

If the old package is uninstalled first, the port will build, package 
and run fine.
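
For illustration, the kind of fix I have in mind would be something
like the following (untested sketch; the paths assume the stock
llama.cpp layout with headers under include/ and ggml/include/),
forcing the in-tree directories ahead of whatever CMake picks up
from /usr/local/include:

    # hypothetical addition to the top-level CMakeLists.txt:
    # put the in-tree headers ahead of any installed copies
    include_directories(BEFORE
        ${CMAKE_CURRENT_SOURCE_DIR}/include
        ${CMAKE_CURRENT_SOURCE_DIR}/ggml/include)

Whether CMakeLists.txt is the right layer for this, rather than the
port Makefile or the CXXFLAGS ordering, is part of what I am unsure
about.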

This seems likely to be a common scenario and I'm embarrassed I haven't
figured out how to solve it.

I had hoped to get this in shape before 7.8, so I would welcome any
suggestions or hints on how this might be solved.

-P


[1] https://openai.com/index/introducing-gpt-oss
Index: Makefile
===================================================================
RCS file: /cvs/ports/misc/llama.cpp/Makefile,v
diff -u -p -r1.9 Makefile
--- Makefile	12 Jun 2025 00:03:18 -0000	1.9
+++ Makefile	11 Sep 2025 13:03:29 -0000
@@ -8,14 +8,14 @@ COMMENT =		LLM inference system
 
 GH_ACCOUNT =		ggerganov
 GH_PROJECT =		llama.cpp
-GH_TAGNAME =		b5372
+GH_TAGNAME =		b6447
 PKGNAME =		llama-cpp-0.0.${GH_TAGNAME:S/b//}
 
 SHARED_LIBS +=		ggml-base 1.0
 SHARED_LIBS +=		ggml-cpu 1.0
 SHARED_LIBS +=		ggml 1.0
 SHARED_LIBS +=		llama 2.0
-SHARED_LIBS +=		mtmd_shared 0.0
+SHARED_LIBS +=		mtmd 0.0
 SHARED_LIBS +=		ggml-vulkan 2.0
 
 CATEGORIES =		misc
Index: distinfo
===================================================================
RCS file: /cvs/ports/misc/llama.cpp/distinfo,v
diff -u -p -r1.3 distinfo
--- distinfo	15 May 2025 01:38:55 -0000	1.3
+++ distinfo	11 Sep 2025 13:03:29 -0000
@@ -1,2 +1,2 @@
-SHA256 (llama.cpp-b5372.tar.gz) = 28q/8fqCc/rtzo8qbUtQ6njW/Zs3mzaLH0ynCWIkuk8=
-SIZE (llama.cpp-b5372.tar.gz) = 21147804
+SHA256 (llama.cpp-b6447.tar.gz) = HdbOkBXyMXVzjSrVo+GuBrvL8rprWTjTyx3ErWUbvQk=
+SIZE (llama.cpp-b6447.tar.gz) = 25705612
Index: patches/patch-common_common_cpp
===================================================================
RCS file: patches/patch-common_common_cpp
diff -N patches/patch-common_common_cpp
--- patches/patch-common_common_cpp	15 May 2025 01:38:55 -0000	1.1
+++ /dev/null	1 Jan 1970 00:00:00 -0000
@@ -1,12 +0,0 @@
-Index: common/common.cpp
---- common/common.cpp.orig
-+++ common/common.cpp
-@@ -830,7 +830,7 @@ std::string fs_get_cache_directory() {
-     if (getenv("LLAMA_CACHE")) {
-         cache_directory = std::getenv("LLAMA_CACHE");
-     } else {
--#if defined(__linux__) || defined(__FreeBSD__) || defined(_AIX)
-+#if defined(__linux__) || defined(__FreeBSD__) || defined(_AIX) || defined(__OpenBSD__)
-         if (std::getenv("XDG_CACHE_HOME")) {
-             cache_directory = std::getenv("XDG_CACHE_HOME");
-         } else {
Index: patches/patch-tools_rpc_rpc-server_cpp
===================================================================
RCS file: patches/patch-tools_rpc_rpc-server_cpp
diff -N patches/patch-tools_rpc_rpc-server_cpp
--- patches/patch-tools_rpc_rpc-server_cpp	15 May 2025 01:38:55 -0000	1.1
+++ /dev/null	1 Jan 1970 00:00:00 -0000
@@ -1,12 +0,0 @@
-Index: tools/rpc/rpc-server.cpp
---- tools/rpc/rpc-server.cpp.orig
-+++ tools/rpc/rpc-server.cpp
-@@ -111,7 +111,7 @@ static std::string fs_get_cache_directory() {
-     if (getenv("LLAMA_CACHE")) {
-         cache_directory = std::getenv("LLAMA_CACHE");
-     } else {
--#if defined(__linux__) || defined(__FreeBSD__) || defined(_AIX)
-+#if defined(__linux__) || defined(__FreeBSD__) || defined(_AIX) || defined(__OpenBSD__)
-         if (std::getenv("XDG_CACHE_HOME")) {
-             cache_directory = std::getenv("XDG_CACHE_HOME");
-         } else {
Index: pkg/PLIST
===================================================================
RCS file: /cvs/ports/misc/llama.cpp/pkg/PLIST,v
diff -u -p -r1.4 PLIST
--- pkg/PLIST	15 May 2025 01:38:55 -0000	1.4
+++ pkg/PLIST	11 Sep 2025 13:03:29 -0000
@@ -5,6 +5,7 @@ bin/convert_hf_to_gguf.py
 @bin bin/llama-cli
 @bin bin/llama-convert-llama2c-to-ggml
 @bin bin/llama-cvector-generator
+@bin bin/llama-diffusion-cli
 @bin bin/llama-embedding
 @bin bin/llama-eval-callback
 @bin bin/llama-export-lora
@@ -15,6 +16,7 @@ bin/convert_hf_to_gguf.py
 @bin bin/llama-gguf-split
 @bin bin/llama-gritlm
 @bin bin/llama-imatrix
+@bin bin/llama-logits
 @bin bin/llama-lookahead
 @bin bin/llama-lookup
 @bin bin/llama-lookup-create
@@ -35,7 +37,6 @@ bin/convert_hf_to_gguf.py
 @bin bin/llama-speculative-simple
 @bin bin/llama-tokenize
 @bin bin/llama-tts
-@bin bin/vulkan-shaders-gen
 include/ggml-alloc.h
 include/ggml-backend.h
 include/ggml-blas.h
@@ -43,16 +44,18 @@ include/ggml-cann.h
 include/ggml-cpp.h
 include/ggml-cpu.h
 include/ggml-cuda.h
-include/ggml-kompute.h
 include/ggml-metal.h
 include/ggml-opt.h
 include/ggml-rpc.h
 include/ggml-sycl.h
 include/ggml-vulkan.h
+include/ggml-webgpu.h
 include/ggml.h
 include/gguf.h
 include/llama-cpp.h
 include/llama.h
+include/mtmd-helper.h
+include/mtmd.h
 lib/cmake/ggml/
 lib/cmake/ggml/ggml-config.cmake
 lib/cmake/ggml/ggml-version.cmake
@@ -64,5 +67,5 @@ lib/cmake/llama/llama-version.cmake
 @lib lib/libggml-vulkan.so.${LIBggml-vulkan_VERSION}
 @lib lib/libggml.so.${LIBggml_VERSION}
 @lib lib/libllama.so.${LIBllama_VERSION}
-@lib lib/libmtmd_shared.so.${LIBmtmd_shared_VERSION}
+@lib lib/libmtmd.so.${LIBmtmd_VERSION}
 lib/pkgconfig/llama.pc