DominoIQ always reports an error message when configuring LLM local mode

----------------------
**Domino/Notes Version:** 14.5
**Add-on Product (if appropriate, e.g. Verse / Traveler / Nomad / Domino REST API):** DominoIQ
**Its Version:** Llama-3.2-3B-Instruct-Q3_K_L.gguf
**Operating System:** Windows
**Client (Notes, Nomad Web, Nomad Mobile, Android/iOS, browser version):** NA

----------------------

Problem/Query:
I followed the guide (https://help.hcl-software.com/domino/14.5.0/admin/conf_download_llm.html) and deployed DominoIQ in local mode with Llama-3.2-3B-Instruct-Q3_K_L.gguf, but the Domino console always reports this error:
----------------------------------------------------------------------------------------
[79CC:0027-4894] DominoIQTask : CreateProcess succeeded, PID = 4864
[79CC:0027-4894] DominoIQTask: checking status of llama-server...
[79CC:0027-4894] DominoIQTask:  bProcessExists = 1, error = No error
DominoIQTask: checking status of llama-server...
[79CC:0027-4894] DominoIQTask:  bProcessExists = 1, error = No error
[42F4:0002-6268] 2025/09/26 17:05:31   XSP Command Manager initialized
[42F4:0002-6268] 2025/09/26 17:05:31.07 LLMInit> Using servers from directory profile. First server: CN=LP2-AP-51723559/O=HCLPNP
[42F4:0002-6268] 2025/09/26 17:05:31.07 LLMInit> Initialized LLM caches: No error
[42F4:0002-6268] 2025/09/26 17:05:31.07 LLMInit> Staticmem init: done
DominoIQTask: checking status of llama-server...
[79CC:0027-4894] DominoIQTask:  bProcessExists = 1, error = No error
[42F4:0002-6268] 2025/09/26 17:05:33   HTTP Server: Started
DominoIQTask: checking status of llama-server...
[79CC:0027-4894] DominoIQTask:  bProcessExists = 0, error = No error
[79CC:0027-4894] LLMProcMemSetReadyForBusinessFlag - bReadyForBusiness was already set to 0
[79CC:0027-4894] 2025/09/26 17:05:37.40 loadStringItem> Error getting item info for provider_APIKey: Note item not found
[79CC:0027-4894] 2025/09/26 17:05:37.40 LLMGetLaunchParameters> Unable to find account CN=LP2-AP-51723559/O=HCLPNP because Note item not found
[79CC:0027-4894] 2025/09/26 17:05:37.40 LLMGetLaunchParameters>  Returning error = 
[79CC:0027-4894] 2025/09/26 17:05:37   DominoIQTask : Domino IQ AI engine terminated unexpectedly.
[79CC:0027-4894] 2025/09/26 17:05:37   DominoIQTask : Error retrieving server configuration in config database, enable debug with DEBUG_DOMIQ=1: Note item not found
--------------------------------------------------------------------------------------------
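The last console line above already names the next diagnostic step: the task could not read its launch parameters from the configuration database and asks for debug to be enabled. Following that hint, the flag from the console message can be set in the server's notes.ini (the setting name is taken verbatim from the log; the exact extra output it produces may vary by release):

```ini
; notes.ini on the Domino server
; Enables verbose Domino IQ diagnostics, as suggested by the console error
DEBUG_DOMIQ=1
```

Restart the DominoIQ task after changing notes.ini so the setting takes effect.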
dominoiq_server.log shows:
----------------------------------------------------------------------------------------
Running without SSL
main: HTTP server is listening, hostname: 127.0.0.1, port: 8080, http threads: 19
main: loading model
srv    load_model: loading model 'models/7B/ggml-model-f16.gguf'
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA RTX A2000 Laptop GPU) - 3292 MiB free
gguf_init_from_file: failed to open GGUF file 'models/7B/ggml-model-f16.gguf'
llama_model_load: error loading model: llama_model_loader: failed to load model from models/7B/ggml-model-f16.gguf
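For context, `models/7B/ggml-model-f16.gguf` is llama.cpp's built-in default model path: llama-server falls back to it when no model argument reaches the process. That the server tried this path suggests the configured model path from Dominoiq.nsf never made it onto the llama-server command line, which is consistent with the "Note item not found" errors in the console log. A minimal sketch of the difference, assuming a stock llama.cpp build (the Windows path below is illustrative):

```
# With an explicit model argument, llama-server loads the configured file:
llama-server -m d:/Domino/Data/llm_models/Llama-3.2-3B-Instruct-Q3_K_L.gguf --port 8080

# With no -m/--model argument, llama-server falls back to its compiled-in
# default -- exactly the path seen in the failing log:
#   models/7B/ggml-model-f16.gguf
```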

Shuqiang,
You mentioned that you deployed local mode "Llama-3.2-3B-Instruct-Q3_K_L.gguf", but it is trying to load the model 'models/7B/ggml-model-f16.gguf' and failing.
Would you please share screenshots of the Model document and the Configuration document in Dominoiq.nsf?

Here is the log in my test environment:
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:

Device 0: NVIDIA T1200 Laptop GPU, compute capability 7.5, VMM: yes
build: 4969 (3dd80d17) with MSVC 19.34.31948.0 for x64
system info: n_threads = 8, n_threads_batch = 8, total_threads = 16

system_info: n_threads = 8 (n_threads_batch = 8) / 16 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1

Running without SSL
Web UI is disabled
main: HTTP server is listening, hostname: 127.0.0.1, port: 8080, http threads: 22
main: loading model
srv load_model: loading model 'd:\Domino\Data\llm_models\Llama-3.2-3B-Instruct-Q4_K_M.gguf'

Best Regards,
Xiaoyun

It seems that my Dominoiq.nsf had some issue. I deleted it and let the DominoIQ task recreate it, and now DominoIQ works successfully. Thanks!