This is a proxy server service for JanitorAI that bypasses the CORS errors of strict LLM APIs such as NVIDIA NIM or Vercel AI Gateway. It is available as a WebAPI and on Google Colab.
The Colab version is sometimes faster, and can be used if the WebAPI service isn't available:
https://colab.research.google.com/drive/1DJtdr5KjpmEf01riSRQxMnNqTc_sn8U9#scrollTo=0kID1kkHPqOx
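As a quick illustration of how the proxy is addressed (a sketch based on the URLs in this guide, not official documentation): the target API's base URL is passed to the proxy endpoint in a ?url= query parameter, and the proxy forwards requests there with the CORS headers a browser needs.

```python
# Minimal sketch of the proxy's URL scheme: the target API base URL
# goes in the ?url= query parameter of the proxy endpoint.
JPROXY = "https://jproxy.onrender.com/proxy"

def proxied(target_base: str) -> str:
    """Build the proxy URL that forwards to `target_base`."""
    return f"{JPROXY}?url={target_base}"

print(proxied("https://integrate.api.nvidia.com/v1"))
# https://jproxy.onrender.com/proxy?url=https://integrate.api.nvidia.com/v1
```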
General Information:
Steps to use (NVIDIA NIM):
Go to https://build.nvidia.com/, create an account and get the API key.
Choose the model you want to use, for example:
- deepseek-ai/deepseek-v3.1
- qwen/qwen3-235b-a22b
- mistralai/mistral-medium-3-instruct
- openai/gpt-oss-120b (censored)
- moonshotai/kimi-k2-instruct
Go to JanitorAI and add a new proxy configuration.
Fill out the settings. Set the Proxy URL to:
https://jproxy.onrender.com/proxy?url=https://integrate.api.nvidia.com/v1
The target API's base URL goes after ?url=, and /chat/completions is not needed.
Choose options. (optional)
reasoning=force: the <think> section will be visible. If you're using a model with both reasoning and non-reasoning modes, like DeepSeek-V3.1, this option forces it to enable reasoning. To use it, add &reasoning=force to the end of the Proxy URL:
https://jproxy.onrender.com/proxy?url=https://integrate.api.nvidia.com/v1&reasoning=force
reasoning=visible: the <think> section will be visible. Unlike reasoning=force, this option doesn't force reasoning to be enabled. To use it, add &reasoning=visible to the end of the Proxy URL:
https://jproxy.onrender.com/proxy?url=https://integrate.api.nvidia.com/v1&reasoning=visible
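To show what actually goes over the wire, here is a hypothetical sketch of a chat-completion request routed through the proxy with reasoning=force. JanitorAI normally builds and sends this request for you; the payload fields follow the standard OpenAI-compatible chat format, and NVIDIA_API_KEY is a placeholder for your own key.

```python
import json

# Illustrative sketch only: the proxy URL and option come from this guide;
# the payload shape is the standard OpenAI-compatible chat format.
PROXY_URL = ("https://jproxy.onrender.com/proxy"
             "?url=https://integrate.api.nvidia.com/v1"
             "&reasoning=force")

payload = {
    "model": "deepseek-ai/deepseek-v3.1",  # one of the models listed above
    "messages": [{"role": "user", "content": "Hello!"}],
}
headers = {
    "Authorization": "Bearer NVIDIA_API_KEY",  # placeholder: your key from build.nvidia.com
    "Content-Type": "application/json",
}

body = json.dumps(payload)

# To actually send it (requires the third-party `requests` package):
# import requests
# resp = requests.post(PROXY_URL, headers=headers, data=body)
# print(resp.json()["choices"][0]["message"]["content"])
```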
Steps to use (Vercel AI Gateway):
Go to https://vercel.com/, create an account and get the API key.
Choose the model you want to use, for example:
- deepseek/deepseek-v3.1
- google/gemini-2.5-pro
- anthropic/claude-opus-4
- xai/grok-4
Go to JanitorAI and set up a new proxy configuration. Set the Proxy URL to:
https://jproxy.onrender.com/proxy?url=https://ai-gateway.vercel.sh/v1
/chat/completions is not needed.
You can use other proxy services as well. All you need is to use the URL: https://jproxy.onrender.com/proxy?url=<Proxy service's URL>
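The same ?url= pattern applies to any compatible API base URL; only the NVIDIA and Vercel URLs below come from this guide, any other target is your own choice.

```python
# Sketch: building proxy URLs for the two services covered in this guide.
JPROXY = "https://jproxy.onrender.com/proxy"

targets = {
    "NVIDIA NIM": "https://integrate.api.nvidia.com/v1",
    "Vercel AI Gateway": "https://ai-gateway.vercel.sh/v1",
}

for name, base in targets.items():
    print(f"{name}: {JPROXY}?url={base}")
```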
Thank you for using this service!