[Tools] SLR - Offline JP to EN Translation for RPG Maker VX, VX Ace, MV, MZ, and Pictures

It's a new all-purpose model that Google created to build into Chrome, intended to replace Chrome's old built-in tools for things like summarization and translation.

Official Google doc here: https://developer.android.com/ai/gemini-nano

Also, I tried the ChatGPT endpoint before; all I got was frustration trying to configure it.
I'm just going to be honest, looking at this I'm pretty lost.
I have no idea how someone is supposed to use that.
:gokiko_nooo:
 
Finally finished a serious offline DSLR test with the new models.
This is only for someone with bad hardware. If you have a high-end rig, the tests and conclusions do not apply to you at all.

The test machine is a 9-year-old PC that was expensive back then but never received any hardware upgrade.
The source text is 250,000 characters with a lot of @1 placeholders, \n[] commands, and \c[] commands.

Test 1
Primary model: Qwen3 VL 8B Instruct abliterated v2 I1 (Q5_K_S)
Fallback model: Gemma3 27B abliterated dpo I1 (Q4_K_S)

The translation took ~8.5 hours. Not a single complete failure.

Test 2
Primary model: Qwen3 VL 4B Instruct abliterated v1 (Q8_0)
Fallback model: Qwen3 VL 8B Instruct abliterated v2 (Q8_0)

The translation took ~3.5 hours. 7 cells failed completely and need manual fixing.


Observations:
Test 1 was a failure on this hardware.
The primary model screwed up constantly, requiring the large fallback model to step in, which is why it took so long.
8.5 hours for a mediocre translation of a medium-sized game is not acceptable.
If you factor in the monetary value of the power consumption, the hardware wear, and the PC simply being blocked the whole time, you could pay for DeepSeek instead and get a faster, better translation.

Test 2 was also a failure, although a less bad one.
3.5 hours is still really long considering it had complete failures.
The primary model performed basically the same as the primary in the first test.
The fallback model was too big to offload onto the old GPU, and as a result it was pretty slow and did not have a particularly good rate of fixing things. On those 7 complete failures it just wasted a really long time failing.

Current Conclusion:
If you have outdated hardware, I would use the biggest abliterated Qwen3 model you can fully fit in your GPU as the primary model, and then use DeepSeekV3-0324 as the fallback option via the free requests on OpenRouter.
That would be quite fast and free, you would not have a single complete failure, and if the game you are translating has a lot of \c[] commands and the like, you will likely still get a significantly better translation than with SugoiV4-based SLR.
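
For reference, a request to that OpenRouter fallback through the usual OpenAI-compatible API would look roughly like this. The ":free" model ID and the prompt are just my illustration of the idea, not DSLR's internal request format, and availability obviously changes:

```python
# Minimal sketch of a request to DeepSeekV3-0324 via OpenRouter's
# OpenAI-compatible endpoint. The ":free" model ID reflects OpenRouter's
# naming at the time of writing and may have changed; the prompt is an
# illustration, not DSLR's internal request format.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",  # placeholder
)

resp = client.chat.completions.create(
    model="deepseek/deepseek-chat-v3-0324:free",
    temperature=0.1,  # low temperature helps it follow instructions
    messages=[
        {"role": "system",
         "content": "Translate the Japanese RPG Maker text to English. "
                    "Keep \\c[] and \\n[] commands and @1 placeholders intact."},
        {"role": "user", "content": "\\c[2]勇者\\c[0]よ、@1を手に入れた!"},
    ],
)
print(resp.choices[0].message.content)
```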

If you absolutely want to keep it offline, I would honestly just turn the fallback option off and, once it finishes, manually fix the cells with the TRANSLATIONFAILURE error code (worst case, just run normal SLR on them). That is much faster than trying to make a huge model do it for you.

If none of the models fit in your GPU, stick to SLR; it's not worth it.
 
I've released v2.030.
Most notable change: frequency_penalty, presence_penalty, stream, and chat_template_kwargs parameter options for DSLR, and the endpoint parameters are now optional.
(If you leave a parameter blank, or set it to 69 if it's a number, it will not be sent to the endpoint at all.)
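
To illustrate the blank/69 rule: the parameter simply gets filtered out of the request body, so the endpoint falls back to its own default instead of receiving a value. A rough sketch of that logic (the names are mine, not DSLR's actual code):

```python
# Rough sketch of the "blank or 69 = don't send" rule described above.
# The option names mirror the changelog; the filtering logic is my guess
# at the described behavior, not DSLR's actual code.
SENTINEL = 69  # numeric options set to 69 are treated as "unset"

def build_payload(model, messages, **options):
    payload = {"model": model, "messages": messages}
    for name, value in options.items():
        if value is None or value == "" or value == SENTINEL:
            continue  # omitted entirely; the endpoint uses its own default
        payload[name] = value
    return payload

# frequency_penalty and stream are dropped, presence_penalty is sent:
print(build_payload(
    "some-model",
    [{"role": "user", "content": "..."}],
    frequency_penalty=SENTINEL,
    presence_penalty=0.5,
    stream="",
))
```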


Changes since last changelog post:

2.030
Added frequency_penalty, presence_penalty, stream, and chat_template_kwargs options to DSLR.
Added prompt options for fallback model.
Parameters that are set to either be empty or 69 will no longer be sent to the endpoint.

2.029
When talking to the LLM about text commands, DSLR will now use the correct capitalization of the original.
Changed DSLR default settings.
Changed DSLR documentation.
Changed DSLR FullBatch max_tokens calculation to be substantially lower.
Added better error handling for '400 Bad Request' server errors.

2.028
Added more DSLR options for the fallback model. Fixed the existing ones.
Added adjustments to Temperature and Top K parameters during the last attempts.
If a batch would fail completely, because even the fallback model ran out of tries or because it isn't enabled at all, it will now accept the translation anyway, but add SHISAYE.FAILEDTRANSLATION into it.
Changed default max tokens to 4000.

2.027
Added FallBack LLM options to DSLR.
Removed outdated information from the documentation.
 
I've released v2.032.
To further address the weak-hardware DSLR problem, I've implemented a new option to limit the fallback attempts to 5 single requests.
Meaning instead of retrying the entire batch, it will only use the fallback model on the currently failed translation.
That makes using large models with weak hardware much more viable, because at worst it will make 5 relatively quick attempts instead of wasting 2 hours.

But that new option would be a terrible idea for a fallback model using some kind of free, limited requests/tokens, because it will spam small, inefficient requests.
It's only a good idea for something unlimited and free. (If you host the model yourself.)
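
Roughly, the new flow works like the sketch below. This is hypothetical pseudologic of the behavior described above; all names and the callable interface are mine, not DSLR's internals:

```python
# Hypothetical sketch of the retry flow described above. `primary` and
# `fallback` take a list of source cells and return a list of
# translations, with None marking a failed cell.
FAILURE_MARK = "SHISAYE.FAILEDTRANSLATION"

def translate(cells, primary, fallback, limit_fallback_to_single=True):
    results = primary(cells)
    failed = [i for i, r in enumerate(results) if r is None]
    if not failed:
        return results
    if limit_fallback_to_single:
        for i in failed:
            for _ in range(5):  # at most 5 quick single-cell attempts
                fixed = fallback([cells[i]])[0]
                if fixed is not None:
                    results[i] = fixed
                    break
            else:
                # no batch retry: accept the failure and mark the cell
                results[i] = FAILURE_MARK
    else:
        # old behavior: the fallback model redoes the entire batch,
        # which can take hours with a big model on weak hardware
        results = fallback(cells)
    return results
```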


Changes since last changelog post:

2.032
Fixed it not sending the starter of the context prompt in DSLR.
Changed single request retries to 10 instead of 12.
Added new "Limit Fallback to Single Requests" option.
When enabled it will try 5 more times with the fallback model during single requests instead of the fallback model trying the entire batch again.
It will not retry the batch if those 5 attempts fail.

2.031
Implemented the DSLR endpoint parameter option changes for SEP.
Fixed some wrong text.
 
Some clarification regarding the latest update.

The "Prepare for Batch Translation" button only needs to be pressed once after creating a new project. You do not need to press it again, it wont do anything negative, but nothing positive either.

The new system determines whether you ever pressed the button via a small addition to the cache files, and it will press the button for you if you try to start a batch translation without it.
That means on old projects it will assume that you've never pressed it until it has placed the new information, which is a bit annoying, but it won't actually do anything negative to your project (beyond wasting your time).
You can turn this whole deal off in the options menu.
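
Conceptually the whole check is just a persisted boolean. Purely hypothetical sketch; I'm pretending the cache is JSON here, and the real cache format and key names are something else:

```python
# Purely hypothetical illustration of the check described above; the
# real cache format and key names in SLR Translator are unknown to me.
import json, os

CACHE_FILE = "project_cache.json"  # hypothetical path

def ensure_prepared(run_prepare, check_enabled=True):
    if not check_enabled:  # the whole check can be disabled in the options
        return
    data = {}
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            data = json.load(f)
    if not data.get("batch_prepared", False):  # old projects lack the flag
        run_prepare()  # "presses the button" for you
        data["batch_prepared"] = True
        with open(CACHE_FILE, "w") as f:
            json.dump(data, f)
```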

Was this really necessary? Apparently so. Some people would rather put the effort into writing a long message shitting on a project than read some basic instructions.
RTFM.jpg
 
I've made a translation for RJ281598 using DeepSeekV3.2EXP as a test, because weirdly enough that's currently the cheapest model, but I can't really recommend it.
When it worked, it worked well, but in less than 200k characters it shit the bed 6 times and just replied with "A" or "/" when asked for a full batch.
Not a horrible error to get, since it obviously won't bill you a lot of output tokens for that, but DeepSeekV3-0324 never did that.

I have not tested the proper 3.2 yet. (It came out today.)
I also haven't tried 3.1 Terminus yet. The normal 3.1 is pretty bad.
 
Finished tests with basically all DeepSeek versions now.
My impressions are as follows:

DeepSeekV3: Decent - Not great, not terrible.

DeepSeekV3-0324: Good - Follows instructions very well.

DeepSeekV3.1: Bad - Does not follow instructions, constantly screws up.

DeepSeekV3.1-Terminus: Good - Basically the same as V3-0324.

DeepSeekV3.2-EXP: Bad - Hangs up spamming the same letter quite often.

DeepSeekV3.2: Not great - Hangs up less than 3.2EXP, but it still happens.

I did not test 3.1-Exacto or 3.2-Special, because they are reasoning models with really weird output that is currently not supported by DSLR.

Translation quality for all of them is pretty bland/sterile, and they prefer to take things in a non-lewd way, but the tests were done at 0.1 temperature to give them the best chance of following instructions, so that's not particularly surprising.


TLDR:
I still only really recommend V3-0324, which is also the model I've tested the most, but if you have a better provider for V3.1-Terminus then go with that, since there doesn't seem to be a whole lot of difference.
 
I'm just going to be honest, looking at this I'm pretty lost.
I have no idea how someone is supposed to use that.
:gokiko_nooo:
Yeah, I don't know exactly how Gemini Nano was integrated into Translator++, but with the new Chrome versions it works very well for translations, like a local AI translation.
 
Yeah, I don't know exactly how Gemini Nano was integrated into Translator++, but with the new Chrome versions it works very well for translations, like a local AI translation.
If it's a free addon for T++ I might be able to integrate it, but SLR Translator's UI is based on a really old version of T++, so I doubt it's a simple drop-in, and if it's one of the premium addons it's probably riddled with DRM. Dreamsavior really loves DRM in his supposedly GPLv3-licensed project.

For local AI translations I'm currently using Qwen3 VL 32B abliterated v1, and there are also 8B and 4B versions that still work fairly well (as long as you use Q6 or higher).
I just run them with LM Studio. DSLR can then talk to that like any other OpenAI endpoint.
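
For anyone wanting to reproduce that setup: LM Studio exposes an OpenAI-compatible server on localhost (port 1234 by default), so any OpenAI client can talk to it. A minimal sketch, with the model ID as a placeholder for whatever you have loaded:

```python
# Minimal sketch: talking to a model loaded in LM Studio through its
# OpenAI-compatible local server (port 1234 by default). The model ID
# is a placeholder; use whatever identifier LM Studio shows for your
# loaded model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="qwen3-vl-8b-instruct",  # placeholder model ID
    temperature=0.1,
    messages=[{"role": "user",
               "content": "Translate to English: こんにちは、世界。"}],
)
print(resp.choices[0].message.content)
```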

I couldn't tell you how they actually compare with Gemini Nano though, because I still have basically no data from that thing.
 
Bad news. It seems OpenRouter no longer offers free requests to premium models. You can now only get free requests for stuff you could host yourself, or stuff that's pretty shit.
No DeepSeekV3.X at all anymore either.

As a result OpenRouter is now basically worthless for DSLR, unless you're planning to pay money.
And even that is a bit "meh" because paying a provider directly is probably cheaper, and OpenRouter credits expire after a year, which is a bit bullshit.
 
If it's a free addon for T++ I might be able to integrate it, but SLR Translator's UI is based on a really old version of T++, so I doubt it's a simple drop-in, and if it's one of the premium addons it's probably riddled with DRM. Dreamsavior really loves DRM in his supposedly GPLv3-licensed project.

For local AI translations I'm currently using Qwen3 VL 32B abliterated v1, and there are also 8B and 4B versions that still work fairly well (as long as you use Q6 or higher).
I just run them with LM Studio. DSLR can then talk to that like any other OpenAI endpoint.

I couldn't tell you how they actually compare with Gemini Nano though, because I still have basically no data from that thing.
I actually have tried Qwen with LM Studio, but they didn't translate for me, and I also tried Gemini Nano.

A simple RPG Maker game (without much text) can take between 2 and 5 minutes. A fairly large game, 25-40 minutes, and a huge game, an hour or more.

Gemini Nano was built to run even on a potato of a PC, but the more powerful your PC is, the better this engine responds.


I don't know about plugins in your application, but if you want, I can try to create a plugin to integrate it for you.
 
I actually have tried Qwen with LM Studio, but they didn't translate for me, and I also tried Gemini Nano.
That's confusing. As far as my tests with "normal" LLMs went, the abliterated versions of qwen3vl are by far the best local models available.
How did you use them and at what quant?
They perform pretty badly if you use a quantization below Q6.
And of course running them on CPU is dreadfully slow; they need to be offloaded to VRAM to reach reasonable speed. (But even the 4B versions aren't thaaat bad.)

If you want, give me an example project, I'll translate it with qwen3vl and then you can compare it with your nano results.
I don't know about plugins in your application, but if you want, I can try to create a plugin to integrate it for you.
If you want to make something, go ahead.
But I'm not entirely sure it's worth the effort; only do it if you would use it yourself.
Personally I'm translating stuff with DeepSeekV3.1-terminus, but of course that's not free.

It would need to be fed through the SLRtrans wrapper/escaper system or it will screw up scripts.
You can find examples of how that should look in the www>addons>SLRtrans>lib>TranslationEngine folder.
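
The core idea of the wrapper/escaper, in a generic sketch (not SLRtrans's actual format): swap the engine-specific text codes for inert placeholder tokens before the LLM sees the text, then restore them afterwards.

```python
# Generic sketch of the escaping idea; SLRtrans's real wrapper format
# (see www>addons>SLRtrans>lib>TranslationEngine) will differ from this.
import re

CODE_RE = re.compile(r"\\[A-Za-z]+\[\d*\]|@\d+")  # \c[2], \n[1], @1, ...

def escape(text):
    codes = []
    def stash(match):
        codes.append(match.group(0))
        return f"[[{len(codes) - 1}]]"  # placeholder the LLM is unlikely to mangle
    return CODE_RE.sub(stash, text), codes

def unescape(text, codes):
    for i, code in enumerate(codes):
        text = text.replace(f"[[{i}]]", code)
    return text

wrapped, codes = escape("\\c[2]勇者\\c[0]よ、@1を手に入れた!")
# wrapped == "[[0]]勇者[[1]]よ、[[2]]を手に入れた!"
# ...send `wrapped` to the model, then: unescape(translated, codes)
```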
 
