Translation Aggregator

  • Translation Aggregator

    I'm no longer working on Translation Aggregator, but Setx has released an updated version, here. The files attached directly to this post are now outdated.

    Translation Aggregator basically works like ATLAS, with support for using a number of website translators and ATLAS simultaneously. It was designed to replace ATLAS's interface as well as add support for getting translations from a few additional sources. Currently, it has support for getting translations from Atlas V13 or V14 (Don't need to have Atlas running), Google, Honyaku, Babel Fish, FreeTranslations.com, Excite, OCN, a word-by-word breakdown from WWWJDIC, MeCab, which converts Kanji to Katakana, and its own built-in Japanese parser (JParser). I picked websites based primarily on what I use and how easy it was to figure out their translation request format. I'm open to adding more, but some of the other sites (Like Word Lingo) seem to go to some effort to make this difficult.

    JParser requires edict2 (Or edict) in the dictionaries directory, and supports multiple dictionaries in there at once. It does not support jmdict. You can also stick enamdict in the directory and it'll detect some names as well, though the name list will be heavily filtered to avoid swamping out other hits. If you have MeCab installed, JParser can use it to significantly improve its results. TA can also look up definitions for MeCab output, if a dictionary is installed. In general, MeCab makes fewer mistakes, but JParser handles compound words better, and groups verb conjugations with the verb rather than treating them as separate words.

    TA also includes the ability to launch Japanese apps with Japanese locale settings, automatically inject AGTH into them, and inject its own dll into Japanese apps. Its dll can also translate their menus and dialogs using the ATLAS module (Requires you have ATLAS installed, of course). Versions 0.4.0 and later also include a text hooking engine modeled after AGTH. The menu translation option attempts to translate Windows-managed in-game menus, and is AGTH compatible. The AGTH exe and dlls must be in the Translation Aggregator directory for it to be able to inject AGTH into a process. AGTH is included with the most recent versions of TA.

    The interface is pretty simple, much like ATLAS: Just paste text into the upper left window, and either press the double arrow button to run it through all translators, or press the arrow buttons for individual translation apps. Each algorithm is only run once at a time, so if a window is busy when you tell it to translate something, it'll queue the request if it's a remote one, or stop and rerun it for local algorithms. If you have clipboard monitoring enabled (The untranslated text clipboard button disables it altogether), any clipboard text with Japanese characters copied from another app will be run through all translators that have clipboard monitoring enabled. I won't automatically submit text with over 500 characters to any of the translation websites, so you can skip forward in AGTH without flooding servers, in theory. I still don't recommend automatic clipboard translation for the website translators, however.

    To assign a hotkey to the current window layout, press shift-alt-#. Press alt-# to restore the layout. Bound hotkeys will automatically include the current transparency, window frame, and toolbar states. If you don't want a bound hotkey to affect one or more of those states, then you can remove the first 1 to 3 entries in the associated line in the ini file. Only modify the ini yourself when the program isn't running. All other values in those lines are mandatory.

    Pre-translation substitutions modify input text before it's sent to any translator. Currently this applies to websites, ATLAS, MeCab, and JParser. There's a list of universal replacements ("*") and replacements for every launch profile you've created. I pick which set(s) of substitutions to use based on currently running apps. Note that you do not need to be running AGTH or even have launched a game through TA's launch interface for the game to be detected, but you do need to create a launch profile. May allow you to just drag and drop exes onto the dialog in the future.
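
    Purely to illustrate the idea (the table format and names below are made up, and it's a Python sketch rather than TA's actual C++ code), a substitution pass is basically this:

    Code:
    import re

    # Hypothetical substitution tables: universal rules ("*") plus rules
    # tied to a launch profile.  The patterns here are just examples.
    SUBSTITUTIONS = {
        "*": [(r"【.*?】", "")],       # universal: strip speaker-name brackets
        "game.exe": [("……", "…")],    # per-profile: collapse doubled ellipses
    }

    def apply_substitutions(text, profile="*"):
        """Apply the universal rules, then the active profile's rules."""
        rules = list(SUBSTITUTIONS.get("*", []))
        if profile != "*":
            rules += SUBSTITUTIONS.get(profile, [])
        for pattern, replacement in rules:
            text = re.sub(pattern, replacement, text)
        return text

    print(apply_substitutions("【少女】……こんにちは", "game.exe"))  # -> …こんにちは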

    MeCab is a free program that separates words and gives their pronunciation and part of speech. I use it to get the information needed to parse words and display furigana. If you have MeCab installed but I report I'm having trouble initializing it, you can try copying libmecab.dll to the same directory as this program. Do not install MeCab using a UTF16 dictionary, as I have no idea how to talk to it (UTF16 strings don't seem to work). Instead, configure MeCab to use UTF8, Shift-JIS, or EUC-JP. If you have both MeCab and edict/edict2 installed, you can view a word's translation in MeCab by hovering the mouse over it. Also, JParser can use MeCab to help in parsing sentences.
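
    As a rough idea of what MeCab gives you, here's a small sketch that calls the mecab command-line tool directly (TA itself talks to libmecab.dll; this assumes mecab is on your PATH with a UTF-8 dictionary, per the note above):

    Code:
    import subprocess

    def mecab_parse(text):
        """Run text through the mecab CLI and return (surface, part of speech,
        Katakana reading) for each word.  Assumes a UTF-8 IPADIC dictionary."""
        out = subprocess.run(["mecab"], input=text, capture_output=True,
                             encoding="utf-8").stdout
        words = []
        for line in out.splitlines():
            if line == "EOS" or not line.strip():
                continue
            surface, features = line.split("\t", 1)
            fields = features.split(",")
            reading = fields[7] if len(fields) > 7 else ""  # unknown words lack it
            words.append((surface, fields[0], reading))
        return words

    for surface, pos, reading in mecab_parse("読んでみた。"):
        print(surface, pos, reading)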

    JParser tends to be a better choice for those who know almost no Japanese - it tells you how verbs are conjugated, handles some expressions, etc. MeCab may well be the better choice for those who know some Japanese, however.

    Source, attached below, is available under the GPL v2.

    Thanks to (In alphabetical order, sorry if I'm leaving anyone out):
    Hongfire Members:
    Freaka for his innumerable feature suggestions and reported issues over the course of development.
    Setsumi for TA Helper and for all his suggested improvements and reported issues, particularly with JParser.
    Setx for AGTH.
    Stomp for fixing the open file dialog not working properly on some systems, adding the tooltip font dialog, and fixing a bug that required admin privileges when certain other software was installed.
    Might sound like minor contributions, but feedback really drives the development of TA.

    Non-members:
    KingMike of KingMike's Translations, who is apparently the creator of the EUC-JP table I used to generate my own conversion table.
    Nasser R. Rowhani for his function hooking code.
    Z0mbie for writing the opcode length detector/disassembler I use for hooking. Apparently was intended for virus-related use, but works fine for other things, too.
    And the creators and maintainers of edict, MeCab, and zlib.

    You might also be interested in:
    *Setsumi's TA Helper and AGTHGrab.
    *errotzol's replacements script.
    *Devocalypse's devOSD.
    *kaosu's ITH (Like AGTH. No direct TA support, due to lack of a command line interface, but definitely worth checking out).

    MeCab
    edict2

    Changelog:
    0.4.9
    * Fixed MeCab/JParser getting stuck when starting a new translation before the last one finished.
    * Fixed interface lockup while mousing over an item in MeCab while JParser is running.
    * Menu translation will now translate column headings in ListViews (Needed this for the AA launcher)
    * Fixed ATLAS config crash.
    * Global hotkey support. Toggle under "File" menu (Tools is kinda big already). Currently only really supports history navigation. May add more later.

    0.4.8
    * Added history. Logs both original text and translations (For online translators). It logs up to 20 MB of original text, and whatever translations are associated with it. Currently the only way to force a retranslation is to toggle one of several options (Autoreplace half-width characters, src/dest language, modify substitutions).
    * Fixed deadlock bug on MeCab mouse over while JParser is running.
    * Fixed corrupting built-in text hooker settings when a launch failed. Suspect no one uses this, anyways.
    * Drag/dropping an exe onto TA to open up the injection dialog now activates TA.

    0.4.7
    * JParser and MeCab each use their own thread (Mostly).
    * Changed conjugation table format to JSON - plan to do this to a lot of other files (Being careful not to mess up game settings or substitution tables). Currently have way too much file loading code.

    0.4.6
    * Fix WWWJDIC
    * Fix closing injection dialog
    * Process list updates are 10x+ faster
    * Process list auto-updates
    * Fixed bug that would result in injecting into wrong process when one program is running multiple times.
    * Updated included AGTH version

    0.4.5
    * Added bing support.
    * Updated Honyaku code (They didn't try to block TA, they just modified their HTML)
    * Fixed AGTH command line code.
    * Replaced "/GL" with "/SM" compile option, resulting in faster builds when one has a lot of cores.

    0.4.4
    * Regular expressions are now compiled
    * Injection validation when using addresses relative to dlls (Or function addresses in dlls) should be fixed.
    * Added option to create shortcuts. They'll launch TA (If it's not running) and try to launch the game using the current injection settings (Injection settings that you'd get at the launch screen - the current settings are not saved - it always uses the most recently used ones).
    * Appropriated some of Setsumi's code to make tooltips larger.

    0.4.3
    * Multiple subcontexts now supported. Separate them with semi-colons. AGTH code converter will add two subcontexts, when appropriate.
    * Using aliases for hooks added. Prefix a hook with "[Alias Text]" and that's what will be displayed on the context manager screen as the hook's name. Makes it easier to see context strings.
    * Locale selection added to injection dialog.
    * "Hook delay" added to injection dialog. Actually doesn't delay hooking, delays how long before hooks that use filtering based on calling function's dll are enabled. Generally only the default hooks do this. Increasing this delay may circumvent issues with games that crash when launched with AGTH, but work fine when injected after launching.
    * Added "!" and "~" operators.

    * Stomp's admin privilege fix when using some 3rd party software added.
    * Excite fixed
    * Fixed sanity testing for injection addresses, so when you specify a dll or exe name in a text hook, it shouldn't erroneously think it's an error when the module isn't loaded in the current address space.
    * Fixed some JParser dictionary common-word parsing, when using versions of edict with entL entries. Also changed treatment of Kanji entries when only their corresponding Hiragana are marked as common.
    * Fixed substitution matching Hiragana with Katakana and vice versa.
    * Fixed a clipboard-related crash bug.
    * Fixed hooks causing crashes when relocating call/jumps (Hopefully...)
    * Fixed AGTH repeat filter length placement (oops).

    0.4.2b
    * Fixed substitution loading/deleting.
    * Fixed << and >>.

    0.4.2
    * AGTH code conversion tool.
    * Injection code checker added.
    * New child process injection handler (Really nifty injection code for that...). Should be a little more robust than before.
    * Option not to inject into child processes added.
    * Auto copy to clipboard added.
    * Both extension filters fixed.
    * Both eternal repeat filters fixed/upgraded.
    * Phrase repeat filter fixed/upgraded.
    * OpenMP/MSVC 2008 SP1 runtime requirement removed
    * char/charBE fixed
    * GetGlyphOutline fixed
    * Copy to clipboard crash when auto translate disabled fixed.
    * Slightly improved dll injection error handling.

    0.4.1
    * More context/filter options.
    * Repeated phrase filter now handles cases where the phrase is extended by a couple of characters each time (xxyxyz, etc.). Extension filters are no longer really needed, unless the repeat starts out too short.
    * Option to handle eternally looping text (a rough sketch of both ideas follows this list).
    * Option to ignore text without any Japanese characters.
    * Text which substitution rules reduce to nothing no longer overwrites translated text.
    * Log length limit added.
    * Options to manage default internal text hooks added.
    * Clipboard treated as a context. Its default settings should mirror the old handling.
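
    Neither filter's real implementation is posted in this thread, but purely as an illustration of the two ideas above (in Python, not TA's C++), naive versions look something like this:

    Code:
    def collapse_eternal_repeat(s):
        """Collapse text that is one phrase repeated over and over
        (xyzxyzxyz -> xyz)."""
        for period in range(1, len(s) // 2 + 1):
            if len(s) % period == 0 and s == s[:period] * (len(s) // period):
                return s[:period]
        return s

    def collapse_extending_phrase(s):
        """Collapse text built from ever-longer prefixes of one phrase
        (x + xy + xyz -> xyz).  Naive: can over-strip text that genuinely
        starts with a doubled phrase."""
        changed = True
        while changed:
            changed = False
            for k in range(1, len(s) // 2 + 1):
                if s[:k] == s[k:2 * k]:
                    s = s[k:]      # drop the earlier, shorter copy
                    changed = True
                    break
        return s

    print(collapse_eternal_repeat("はいはいはい"))    # -> はい
    print(collapse_extending_phrase("わわたわたし"))  # -> わたし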

    0.4.0
    * Added its own text hooking engine. Probably still buggy.
    * Fixed excessive redrawing when a hidden furigana window had clipboard translation enabled.
    * Works with new, even more poorly formatted edict files.
    * Handles EUC-JP characters that Windows does not (Doesn't use them properly with WWWJDIC at the moment, however). Only really fixes loading edict files with those characters.
    * Fixed right clicking when full screen.
    * Fixed not checking auto Hiragana mode.
    * Less picky when reading MeCab output.
    Attached Files
    Last edited by ScumSuckingPig; 07-11-2015, 11:20 AM. Reason: Change download link, re-upload attachments upon request from Setx

  • Originally posted by ScumSuckingPig
    Edit: On a side note, is "Lapanese" your typo, or your way of telling me I have a typo? Looks fine in my menu...
    Oh, my typo. I meant that when the option is active, there is no indent and no coloring of a parenthesized block at the beginning of a new line. Example: かしら

    • 0.3.4 released. Another pretty minor update.

      Has pretty much all the changes/fixes Setsumi suggested except for the change in JParser handling of substitutions (Note that if you add a name to the dictionary and have it in Katakana, and you use enamdict, you should get sorta the behavior described, assuming the online translators recognize the name in Katakana as well).

      Also added logging, another minor bugfix or two, and one or two bonus JParser formatting options. "Hide usage information" currently only hides the "/(P)" at the end of a lot of dictionary entries. Thinking I might make it hide the other usage information as well, but not sure (Dialect info and vulgar/colloquial/onomatopoeia code etc).
      Last edited by ScumSuckingPig; 03-18-2010, 11:07 PM.

      • The word list in cjdict.txt is generated by combining the three word lists listed below, with further processing for compound word breaking. The frequency is generated with iterative training against Google web corpora.
        * CC-CEDICT (Chinese)
        * Libtabe (Chinese)
        * IPADIC (Japanese)
        http://src.chromium.org/viewvc/chrom...itr/cjdict.txt
        The file contents look like this:
        Code:
        何	60
        何かしら	103
        何かと	96
        何がな	243
        ...

        • cjdict seems to lack specs. Without having any idea what a number actually means, not sure how useful it is. As I actually have to come up with some consistent score, this is a fairly significant issue.

          Also, IPADIC is what mecab uses, I believe. It has a lot fewer "words" than edict. In particular, it's missing a lot of "expressions", which would be an issue if using it with edict. Think this is the more significant issue. Also, without specs, unsure how, or if, it handles verb conjugations (Same goes for IPADIC, which probably has specs, but not in English). If it doesn't, might need to increase the frequency of verbs slightly.

          That having been said, that information could be rather useful.

          A parser that actually considers part-of-speech info and context within a sentence would really help, too, but unfortunately, I don't have a sufficient understanding of Japanese grammar to come up with one. Of course, if I considered more than the current and previous words, my current dynamic programming algorithm wouldn't work.
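
          JParser's actual scoring isn't posted anywhere in this thread, but just to make the "current and previous words" remark concrete, here's a toy dynamic-programming segmenter (Python, with made-up frequencies). A pairwise current/previous-word score would slot into the same recurrence; anything that looked further back would not.

          Code:
          import math

          # Made-up frequency counts, not taken from any real dictionary file.
          FREQ = {"何": 60, "何か": 80, "かしら": 40, "何かしら": 103, "と": 200}
          TOTAL = sum(FREQ.values())

          def segment(text, max_len=8):
              """best[i] holds (score, words) for the best split of text[:i].
              Each step only combines a previous state with the current word,
              which is what keeps the dynamic programming cheap."""
              best = [(-math.inf, [])] * (len(text) + 1)
              best[0] = (0.0, [])
              for i in range(len(text)):
                  score_i, words_i = best[i]
                  if score_i == -math.inf:
                      continue
                  for j in range(i + 1, min(i + max_len, len(text)) + 1):
                      word = text[i:j]
                      if word in FREQ:
                          word_score = math.log(FREQ[word] / TOTAL)
                      elif j - i == 1:
                          word_score = -10.0  # unknown single character: harsh penalty
                      else:
                          continue            # unknown multi-character span: skip
                      if score_i + word_score > best[j][0]:
                          best[j] = (score_i + word_score, words_i + [word])
              return best[len(text)][1]

          print(segment("何かしらと"))  # -> ['何かしら', 'と']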

          Edit: Amusingly, this thread is now the 2nd google hit for "cjdict frequency" (without quotes), indicating there's not much info available on the dictionary.
          Last edited by ScumSuckingPig; 03-19-2010, 08:02 AM.

          • Originally posted by Setsumi
            http://src.chromium.org/viewvc/chrom...itr/cjdict.txt
            File contents is like this:
            Code:
            何	60
            何かしら	103
            何かと	96
            何がな	243
            ...
            Here's a comparison of the edict P tags and cjdict counts. Note that it is only for exact matches (No hiragana to katakana matches and vice versa). The first column is the cjdict numbers; the next two are counts of cjdict entries that correspond to edict entries without a (P) label and those that do, respectively. If one cjdict entry corresponds to edict entries both with and without (P) tags, it's included as a P hit, of course. One file includes conjugated verb hits, the other does not. edict entries with no corresponding cjdict entry are not included.

            While there is a clear correlation, it's not nearly as strong as I'd like. The entry at 45 without a P-tag is "在". The jmdict entry has no increased frequency indicator for it at all (jmdict has more info than edict, so some entries with no edict annotation can have one in jmdict). Not sure if the word is just a lot more common in Chinese, or if the verb "在る", which does have a P tag, is included in the count, or what. I could put together a list of some of the disagreements, if anyone's interested. I'm not even remotely qualified to form an opinion on them.

            The number of disagreements seems significant enough that I'm not sure cjdict is going to be at all useful, except perhaps as a tie breaker.

            Edit: Suppose it could also be the case that edict's annotations are much worse than I thought. Would be nice to have a test set with correct answers. I know wwwjdic kinda sorta has some downloadable examples. Maybe could try those.
            Attached Files
            Last edited by ScumSuckingPig; 03-20-2010, 10:59 PM. Reason: non-P/P column description flipped

            • Originally posted by ScumSuckingPig
              Not sure if the word is just a lot more common in Chinese, or if the verb "在る", which does have a P tag, is included in the count, or what.
              I'm thinking something similar: kanji-only words can't be trusted because of Chinese, like this:
              在 45 - "country" (frequency influenced by Chinese)
              ある 50 - "to be" (frequent)
              在る 237 - "to be" (rare)
              This looks plausible because "在る [ある]" is tagged with (uk) - "usually written with kana".
              Last edited by Setsumi; 03-20-2010, 11:16 PM.

              • What the heck? "Usually written as Kana..." (Which is clearly true, it's a word that I actually recognize instantly in kana, and I don't in Kanji), but the edict entry is:

                在る(P);有る(P) [ある] /..../(P)

                Which actually means that the 2 Kanji spellings are considered common and the kana is not, which is untrue. Jmdict has all the entries as common, but is annotated funkily. ある has two different frequency values from the same body of text...
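
                Just to see where those (P) markers actually sit, a quick-and-dirty parse of a line in that format (my own throwaway sketch in Python, not JParser's parser) gives:

                Code:
                import re

                def parse_edict_line(line):
                    """Split an edict2-style line "KANJI;KANJI [KANA] /gloss/.../(P)/"
                    into (kanji forms, kana forms, entry-level common flag), where
                    each form is (text, has its own (P) marker)."""
                    m = re.match(r"^(\S+)(?:\s+\[([^\]]+)\])?\s+/(.*)$", line.strip())
                    headwords, kana_part, glosses = m.group(1), m.group(2), m.group(3)
                    def split_forms(part):
                        return [(f.removesuffix("(P)"), f.endswith("(P)"))
                                for f in part.split(";")]
                    kanji = split_forms(headwords)
                    kana = split_forms(kana_part) if kana_part else []
                    return kanji, kana, "(P)" in glosses

                print(parse_edict_line("在る(P);有る(P) [ある] /..../(P)"))
                # -> ([('在る', True), ('有る', True)], [('ある', False)], True)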

                OK, I'm now convinced that I can't rely on edict's frequency information in the slightest. I also don't think cjdict is a great replacement for its frequency information, because of the Chinese thing and its possible lack of frequent phrases included in edict.

                So now I'm not sure how to improve my frequency information. It would take a huge set of widely varied Japanese text to create my own counts, unfortunately. That also wouldn't handle words with identical spellings, though that's mostly only an issue in terms of what furigana I display and definition order; it could mess up verb frequency counts as well.

                Suppose I could do a couple hundred thousand google queries, but... I think that might make google mad, and it doesn't really handle words that can be parts of other words, though might be able to subtract those out...
                Last edited by ScumSuckingPig; 03-20-2010, 11:54 PM.

                • Playing around a bit... I think the best option may just be to add an option to use mecab to help... Prefer words that don't cross any mecab boundary and have the same pronunciation as mecab results. The latter will be a bit of a pain, methinks, since I currently pretty much separate parsing from picking the pronunciation.

                  What I really don't like about mecab is that it splits up conjugated verbs and has a much smaller dictionary, so it doesn't recognize a huge number of words in edict. It generally parses those words as multiple small words that, when combined, make up the longer word, however. Of course, there are also cases where I incorrectly combine words, but figuring out where would require a lot of fooling with the data mecab returns...
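
                  The boundary check itself is simple enough; something along these lines (an illustration of the idea only, not TA's code):

                  Code:
                  def boundary_edges(mecab_surfaces):
                      """Character offsets of mecab token edges, including 0 and the end."""
                      edges, pos = {0}, 0
                      for surface in mecab_surfaces:
                          pos += len(surface)
                          edges.add(pos)
                      return edges

                  def crosses_boundary(start, end, edges):
                      """A candidate word [start, end) crosses a boundary if either endpoint
                      falls in the middle of a mecab token; spanning whole tokens is fine,
                      which is what lets longer edict compounds through."""
                      return start not in edges or end not in edges

                  # Hypothetical mecab split of 読んでみた into 読ん/で/み/た:
                  edges = boundary_edges(["読ん", "で", "み", "た"])
                  print(crosses_boundary(0, 5, edges))  # 読んでみた, whole span -> False
                  print(crosses_boundary(1, 3, edges))  # んで, starts mid-token -> True
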
                  Last edited by ScumSuckingPig; 03-21-2010, 12:41 AM.

                  • MeCab stopped working when I replaced 0.2.96 with 0.3.4

                    • Hmm... I'm using 3.4.3 and it seems to be working fine... Curious.

                      • Nope, it's not working at all, so I had to go back all the way to 0.2.9, because in 0.3.2 and 0.3.3 ATLAS doesn't work either

                        • You're using ATLAS 14?

                          • Yes, V14 is the version that I'm using

                            • Could try running it as admin. Not sure what else to suggest.

                              • My account is THE administrator's account, plus I'm running XP
