Update llama.cpp
On Thu, 11 Sep 2025 17:52:07 +0200,
Percy Piper <piper.percy@googlemail.com> wrote:
>
> This update brings some nice improvements (e.g. gpt-oss support[1]).
>
> However, I am unable to resolve a blocking issue. During the build
> -I/usr/local/include appears before the source's include dirs, so any
> previously installed headers are found before those in the sources,
> causing the build to fail.
>
> If the old package is uninstalled first, the port will build, package
> and run fine.
>
> This seems likely to be a common scenario and I'm embarrassed I haven't
> figured out how to solve it.
>
> I had hoped to get this in shape before 7.8, so I'd welcome any
> suggestions or hints on how this might be solved.
>
Have you tried adding BEFORE to target_include_directories and
include_directories? Like this:
Index: src/CMakeLists.txt
--- src/CMakeLists.txt.orig
+++ src/CMakeLists.txt
@@ -37,8 +37,8 @@ add_library(llama
unicode.h
)
-target_include_directories(llama PRIVATE .)
-target_include_directories(llama PUBLIC ../include)
+target_include_directories(llama BEFORE PRIVATE .)
+target_include_directories(llama BEFORE PUBLIC ../include)
target_compile_features (llama PRIVATE cxx_std_17) # don't bump
target_link_libraries(llama PUBLIC ggml)
Maybe AFTER needs to be added for the other include dirs as well.
--
wbr, Kirill