NEW: devel/codex
On Fri, 16 Jan 2026 05:07:01 +0100,
Chris Cappuccio <chris@nmedia.net> wrote:
>
> This is the Codex LLM agent from OpenAI.
>
> It can use tools within the filesystem to aid in a variety of
> tasks, particularly coding-related activities.
>
> The llama.cpp + codex pipeline enables an entirely native,
> OpenBSD-based, AMDGPU-accelerated LLM inference agent system.
>
> This is one of the last versions of Codex that will support the "Chat
> Completions" API, which llama.cpp currently serves. (Future Codex
> versions will require "Responses" API support in llama.cpp.)
I see a PR for llama.cpp to support this API, so it will probably be
released soon: https://github.com/ggml-org/llama.cpp/pull/18486
I haven't fully read your port, just took a very fast look, and one thing
prompted a quick suggestion. I think this way is cleaner:
DIST_TUPLE =	github openai codex rust-v${V} .
DIST_TUPLE +=	github nornagon crossterm ${CROSSTERM_COMMIT} crossterm
DIST_TUPLE +=	github nornagon ratatui ${RATATUI_COMMIT} ratatui
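
For context, a minimal sketch of how those tuples could sit in the port
Makefile. V and the two commit hashes below are hypothetical placeholders;
the real values come from the submitted port:

# Hypothetical placeholder values, for illustration only; the real
# port pins its own release tag and fork commits.
V =			0.0.0
CROSSTERM_COMMIT =	0123456789abcdef0123456789abcdef01234567
RATATUI_COMMIT =	fedcba9876543210fedcba9876543210fedcba98

# Tuple fields: site account project id subdirectory. As I understand
# it, "." keeps the main distfile at the usual extraction location,
# while the two forks extract into their own named subdirectories.
DIST_TUPLE =	github openai codex rust-v${V} .
DIST_TUPLE +=	github nornagon crossterm ${CROSSTERM_COMMIT} crossterm
DIST_TUPLE +=	github nornagon ratatui ${RATATUI_COMMIT} ratatui

If I remember right, DIST_TUPLE derives the matching SITES and DISTFILES
entries by itself, which is why it reads cleaner than spelling the GitHub
URLs out by hand.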
--
wbr, Kirill