
From: Stuart Henderson <stu@spacehopper.org>
Subject: Re: [NEW] graphics/stable-diffusion.cpp
To: Volker Schlecht <openbsd-ports@schlecht.dev>, "Kirill A. Korinsky" <kirill@korins.ky>, ports <ports@openbsd.org>
Date: Tue, 03 Feb 2026 19:22:02 +0000

How about building with the vendored libggml for now, and switching to the
shared one if they add plugin back-end support?
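
Something like this, untested, with SD_VULKAN written from memory -- please
double-check the exact option name against upstream's CMakeLists.txt:

  # no LIB_DEPENDS on devel/libggml; let upstream build its bundled ggml
  # copy, which is the one built with GGML_MAX_NAME=128
  FLAVORS =             vulkan
  FLAVOR ?=

  .if ${FLAVOR:Mvulkan}
  CONFIGURE_ARGS +=     -DSD_VULKAN=ON
  .endif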

-- 
  Sent from a phone, apologies for poor formatting.

On 3 February 2026 18:11:42 Volker Schlecht <openbsd-ports@schlecht.dev> wrote:

> On 2/3/26 6:56 PM, Kirill A. Korinsky wrote:
>> On Tue, 03 Feb 2026 18:45:00 +0100,
>> Volker Schlecht <openbsd-ports@schlecht.dev> wrote:
>>>
>>> While it *is* based on libggml, the sd-cpp ggml is built with
>>> GGML_MAX_NAME=128, so we can't use devel/libggml from ports.
>>>
>>> Likewise, we can't dynamically select the backend as in llama.cpp,
>>> hence the -vulkan FLAVOR.
>>>
>>
>> Two remarks:
>>
>> 1. I've tried to rebuild llama.cpp and whisper.cpp against ggml with
>>     GGML_MAX_NAME=128 and it works, so at least this isn't a blocker.
>>
>> 2. We can still use the global libggml, but we would have to link the
>>     backend manually, i.e. link either the Vulkan or the CPU backend.
> Cool. I still wonder if that isn't more trouble than it's worth to save
> what, ~2 MB of binary size, for something that requires a multi-gigabyte
> diffusion model to be useful (for very small values of 'useful' :-))
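
For reference, if we later do go the shared-libggml route: assuming ggml.h
still wraps GGML_MAX_NAME in #ifndef (which would be why rebuilding against
GGML_MAX_NAME=128 works at all), devel/libggml could be built with the larger
limit from its port Makefile. Untested sketch, and it assumes the cmake port
module passes CFLAGS/CXXFLAGS through to the build:

  # devel/libggml/Makefile (hypothetical): raise the tensor-name limit so
  # stable-diffusion.cpp and the existing consumers can share one library
  CFLAGS +=     -DGGML_MAX_NAME=128
  CXXFLAGS +=   -DGGML_MAX_NAME=128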