I actually tried Qwen with LM Studio, but it didn't translate for me, and I also tried Gemini Nano.
That's confusing. As far as my tests with "normal" LLMs went, the abliterated versions of qwen3vl are by far the best local models available.
How did you use them and at what quant?
They perform pretty badly if you use a quantization below Q6.
And of course running them on CPU is dreadfully slow, they need to be offloaded to VRAM to have reasonable speed. (But even the 4B versions aren't thaaat bad.)
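If you're running GGUF quants through llama.cpp instead of LM Studio, the offloading looks roughly like this (the flags are real llama.cpp options, but the model filename is just a placeholder, not an actual release):

```shell
# -ngl 99 offloads all layers to the GPU; -ngl 0 would run everything on CPU.
# Model filename is illustrative only.
llama-server -m Qwen3-VL-4B-Q6_K.gguf -ngl 99 -c 8192
```

LM Studio exposes the same thing as a "GPU offload" slider, so the idea carries over either way.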
If you want, give me an example project, I'll translate it with qwen3vl and then you can compare it with your nano results.
I don't know whether your application supports plugins, but if you want, I can try to create one that integrates it for you.
If you want to make something, go ahead.
But I'm not entirely sure if it's worth the effort, only do it if you would use it yourself.
Personally I'm translating stuff with DeepSeekV3.1-terminus, but of course that's not free.
It would need to be fed through the SLRtrans wrapper/escaper system or it will screw up scripts.
You can find examples of what that should look like in the www>addons>SLRtrans>lib>TranslationEngine folder.
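The basic idea behind that kind of wrapper/escaper is to hide script control codes from the model before translating and restore them afterwards. Here's a minimal sketch of the pattern, assuming RPG-Maker-style codes like \C[2] (the regex, placeholder format, and function names are my own illustration, not SLRtrans's actual implementation):

```python
import re

# Matches control codes of the form \X[n], e.g. \C[2] or \n[1].
CODE_RE = re.compile(r'\\[A-Za-z]+\[\d+\]')

def escape_codes(text):
    """Swap control codes for opaque placeholders the LLM won't touch."""
    codes = []
    def stash(m):
        codes.append(m.group(0))
        return f"[[{len(codes) - 1}]]"
    return CODE_RE.sub(stash, text), codes

def unescape_codes(text, codes):
    """Put the original control codes back after translation."""
    for i, code in enumerate(codes):
        text = text.replace(f"[[{i}]]", code)
    return text

escaped, codes = escape_codes(r"\C[2]Hero\C[0] findet einen Schlüssel.")
# Send `escaped` to the model; here a stand-in replacement mimics the LLM call.
translated = escaped.replace("findet einen Schlüssel", "finds a key")
print(unescape_codes(translated, codes))  # → \C[2]Hero\C[0] finds a key.
```

Without this round-trip, the model tends to "translate" or drop the backslash codes, which is exactly how scripts get screwed up.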