
From:
Kirill A. Korinsky <kirill@korins.ky>
Subject:
Re: [NEW] graphics/stable-diffusion.cpp
To:
Volker Schlecht <openbsd-ports@schlecht.dev>
Cc:
ports <ports@openbsd.org>
Date:
Tue, 03 Feb 2026 18:56:22 +0100

On Tue, 03 Feb 2026 18:45:00 +0100,
Volker Schlecht <openbsd-ports@schlecht.dev> wrote:
> 
> While it *is* based on libggml, the sd-cpp ggml is built with
> GGML_MAX_NAME=128, so we can't use devel/libggml from ports.
> 
> Likewise, we can't dynamically select the backend as in llama.cpp,
> hence the -vulkan FLAVOR.
> 

Two remarks:

1. I've tried to rebuild llama.cpp and whisper.cpp against ggml with
   GGML_MAX_NAME=128 and it works, so at least this isn't a blocker.

2. We can still use the global libggml, but then the backend has to be
   linked manually, i.e. pull in either the Vulkan backend or the
   appropriate CPU backend.

Probably this should work:

CXXFLAGS +=		-Wl,-L${LOCALBASE}/lib \
			-Wno-unused-command-line-argument

.if ${FLAVOR:Mvulkan}
.if ${MACHINE_ARCH:Mamd64} || ${MACHINE_ARCH:Maarch64}
CXXFLAGS +=		-Wl,-lggml-vulkan
CONFIGURE_ARGS +=	-DSD_VULKAN=on
.endif
.else
.if ${MACHINE_ARCH:Mamd64}
CXXFLAGS +=		-Wl,-lggml-cpu-x64
.else
CXXFLAGS +=		-Wl,-lggml-cpu
.endif
.endif


and use a patch to increase GGML_MAX_NAME:

Index: include/ggml.h
--- include/ggml.h.orig
+++ include/ggml.h
@@ -226,7 +226,7 @@
 #define GGML_MAX_OP_PARAMS      64
 
 #ifndef GGML_MAX_NAME
-#   define GGML_MAX_NAME        64
+#   define GGML_MAX_NAME        128
 #endif
 
 #define GGML_DEFAULT_N_THREADS  4
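
Since ggml.h only defines GGML_MAX_NAME when it isn't already set (the
#ifndef guard in the hunk above), an alternative to carrying a patch is
to inject the define through the port's flags. A hypothetical fragment,
assuming the devel/libggml build honors CFLAGS/CXXFLAGS:

```make
# Assumption: the build passes these through to every compilation unit,
# so the whole library sees a consistent GGML_MAX_NAME.
CFLAGS +=	-DGGML_MAX_NAME=128
CXXFLAGS +=	-DGGML_MAX_NAME=128
```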



-- 
wbr, Kirill