How to change llama_init_from_model: n_ctx = 4096 in AI Navigator?

Hello, I’m using several models in AI Navigator, but this configuration setting always appears in the logs: llama_init_from_model: n_ctx = 4096. It limits the number of tokens the model can process. How can I change this setting? Thanks.