Localization, Localisation

Practical and concise answers to common questions in G11N, I18N and L10N

Posts Tagged ‘Automation’

Rookie Story: Where to Start with Localisation Management?

Posted by Nick Peris on October 11, 2011

Congratulations! You aced that interview a few weeks ago, and this morning you strolled into the office with a spring in your step! You had the HR induction and were introduced to your new colleagues. Now you’re logging onto the network, the company handbook reassuringly lying on the corner of your desk, or saved on your desktop.

Time to get started! The Company hired you to bring under control this thing almost mysteriously referred to as “Translations”. Your objectives are simple: reduce cost and improve quality. You are their first ever Localisation Manager, and you know the keys to your success will be the standardisation and centralisation of all Localisation activities.

So what do you need to consider from a technical and organisational point of view?
Flags, Nations, People

Getting to Know your Internal Customers

If translations have already been happening in your Organisation, there are existing processes and linguistic assets you should be able to build on. You need to learn about them quickly by focussing on:

  1. Who are your allies? Each Department, Local Office etc. probably has at least one “Translation person”. Find out who they are and what they have been doing. Determine whether they will remain involved once you’ve established the new structure, or if they expect to be relieved of Localisation duties. All going well, you may be able to enroll some of them in an inter-departmental Localisation team, even if it’s only a virtual team.
  2. What is the inventory of current processes? Meet the current owners and document everything. No need for anything fancy since you are going to change these processes, but you need to have it all down so that when the inventory is finished you have an accurate and complete picture.
  3. What are the points common to all? Which of those processes work well and which don’t? The successful ones will be the building blocks for your future world.
  4. What are the specificities of each one? Which are worth keeping? Can they be used by other parts of the Organisation? Do they need to remain specific? Your new processes will need to achieve a balance between harmonisation and flexibility.
  5. Do any of those existing processes use technology such as CAT Tools, Content Management Systems or Translation Management Systems? If so, should they be scaled up and shared across the Organisation?
  6. Do any maintain linguistic assets like Glossaries, Style guides, Translation Memories or even just bilingual files which could be used to create TM’s?

Understanding your Product Lines

You need to understand thoroughly what you are going to localise before you can develop the processes. The questions to answer are:

  1. What types of content? Marketing copy, commercial websites, Software, Help systems, self-service technical content, user-driven content like blogs etc. all use very different registers, vocabulary and forms of address. Moreover, the choices made will differ again from one language to the next. Some content types require high volumes at low cost, such as Support content or product specifications. Some require high quality and creativity, like Copywriting and Transcreation, and you may even choose not to use TM’s for some of those. Some will be specific to parts of your Organisation while others will be global material. You will need to ensure a consistent Corporate identity across all these, in all languages.
  2. What are the fields? Automotive, medical or IT content requires linguists with different backgrounds and specialisations. Make sure you know all the areas of expertise to cover during Translation and Review. For some you might need to add a Subject Matter Expert (SME) review to the more common step of Linguistic Review. Review changes will need to be implemented, communicated to Translators and fed into the TM’s, but the process will need to let SME’s take part without having to learn CAT Tools.
  3. From a technical point of view you will also need to work with the content creators to determine the type of files you will receive from them and those they expect to receive back.
  4. Start a war on spreadsheets as soon as possible. You probably won’t win it, but the more you root out, the better. Teach your customers to understand how parsing rules protect their code by exposing only Localisable content during translation. Promote Localisation awareness during Development and Content creation. Document best practices such as avoiding hard-coded strings (see the short sketch after this list) and providing enough space in the UI to accommodate translations which can run up to 30% longer than the source text, at least when that source is English.
  5. Your aim should be:
    • to receive files that can go straight to Translation with minimum pre-processing
    • to deliver files that your customers can drop into their build or repository for immediate use.
  6. No one should be doing any copy-paste engineering, manual renaming or file conversion.
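
To make the hard-coded strings advice concrete, here is a minimal sketch in Python (the file naming scheme and key names are hypothetical, not from any specific framework) contrasting a hard-coded string with one pulled from an external resource file which parsing rules can safely expose to translators:

    import json

    # Hard-coded: invisible to Localisation tooling, and word order is frozen.
    def greet_hardcoded(name):
        return "Welcome back, " + name + "!"

    # Externalised: a parser can expose the values for translation
    # while protecting the keys and the surrounding code.
    def load_strings(locale):
        # e.g. strings.fr-FR.json contains {"welcome_back": "Bon retour, {name} !"}
        with open(f"strings.{locale}.json", encoding="utf-8") as f:
            return json.load(f)

    def greet(strings, name):
        # Named placeholders let Translators reorder words freely.
        return strings["welcome_back"].format(name=name)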

Designing your Workflows

This can start with a pen and paper, a white board or whatever helps you think quicker, but it should end with a flowchart or set of flowcharts describing the process you’re setting up.

  1. Collaborate with your internal customers. You need to agree a signoff process, and avoid multiple source updates during or after the Translation process.
  2. Enumerate all the stages required and determine the following:
    • How many workflows do you need to describe all scenarios? Try to find the right balance: fewer workflows ensure efficiency, but too few will lead participants to implement their own sub-processes to achieve their goals, and you will lose control and visibility.
    • What stages do you need? The most common are:
      1. Pre-processing
      2. Translation
      3. Linguistic Review
      4. Post-Processing
      5. Visual QA
  3. Who are the owners of each step? Are they internal or external (i.e. colleagues or service providers)? How will you monitor progress and status? How will you pay?
  4. Is there a feedback loop and approval attached to certain steps? Will they prevent the workflow from advancing if certain criteria are not adhered to? Is there a limit to the number of iterations for certain loops?
  5. What automation can be put in place to remove human errors, bottlenecks and “middle men” handling transactions?
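
To make the discussion less abstract, here is a minimal sketch in Python of what such a workflow definition could look like. It is a thought aid, not how any particular TMS models its workflows, and the owners and iteration caps are invented for the example:

    from dataclasses import dataclass

    @dataclass
    class Step:
        name: str
        owner: str               # internal colleague or external provider
        max_iterations: int = 1  # cap on the feedback loop for this step

    # One possible workflow covering the most common stages.
    WORKFLOW = [
        Step("Pre-processing", "Localisation Engineering"),
        Step("Translation", "LSP"),
        Step("Linguistic Review", "Review vendor", max_iterations=2),
        Step("Post-processing", "Localisation Engineering"),
        Step("Visual QA", "QA vendor"),
    ]

    def run(workflow, exit_criteria):
        """Walk the workflow; 'exit_criteria' maps a step name to a function
        returning True once that step's approval conditions are met."""
        for step in workflow:
            for attempt in range(step.max_iterations):
                if exit_criteria[step.name](attempt):
                    break  # criteria met, advance to the next step
            else:
                raise RuntimeError(f"{step.name} stalled after "
                                   f"{step.max_iterations} iterations")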

Choosing your Vendors

Once you’ve determined which of your workflow steps need to be outsourced, you will need to select your providers. Linguistic vendors will likely be your most important choice.

Translation

In-house translators are a luxury rarely afforded. When choosing Translation vendors, first decide between Freelancers and Language Service Providers (LSP). Managing a pool of Translators is a job in itself, so most will hire the services of an LSP, which will also be able to provide relief in terms of Project Management, Technology changes, Staff fluctuations depending on activity or holiday periods etc. Having more than one LSP can be a good strategic choice: it gives you more flexibility with scheduling and pricing. You can specialise your vendors according to content, region or strength. A certain amount of overlap is necessary for you to be able to compare their performance and benefit from a bit of healthy competition.

Linguistic review

Whichever setup you have for Translation, you will need linguistic review in order to ensure the integrity of the message is kept in the target languages. You will also need to ensure consistency between Translators or Agencies, check Terminology, maintain TM’s and Style guides.

Marketing and Local Sales Offices often get involved with that. However, using internal staff removes them from their core tasks, unless you are lucky enough to have dedicated Reviewers. More than likely, in-country colleagues will find it difficult to keep up with the volume and fluctuations of the Review work and will ultimately prove an unreliable resource. The solution is to hire the services of professional Reviewers. Many LSPs provide such services. Some ask their competing providers to review each other, but that often results in counter-productive arguments. A third-party dedicated review vendor will be best placed to enforce consistency, accurately measure quality, maintain linguistic assets, and even manage translator queries on your behalf.

Selecting Technology

Translation Memory technology is a must. Which one you go for may be determined or influenced by existing internal processes, particularly if there are linguistic assets (TM’s and Glossaries) in proprietary formats. Your vendors may also have a preferred technology or even propose to use their own. If you go down that road, make sure you own the linguistic assets. The file format is another choice that needs to be made carefully from the start. Open source formats may save you from being locked into one technology. However technology vendors often develop better functionalities for their proprietary formats. It can be a trade-off between productivity and compatibility.

The good news is that conversion between formats is almost always possible. This means migration between technologies is possible, but avoid including conversion as a routine part of the process. Even if it is automated, having to routinely output TM’s in several formats, for example, will introduce inefficiencies and increase user-support requirements.
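
As an illustration of why conversion is almost always possible, here is a minimal sketch reading translation units out of a TMX file with Python’s standard library. The file name is a placeholder, and inline markup inside segments is flattened for simplicity:

    import xml.etree.ElementTree as ET

    XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

    def read_tmx(path, source_lang="en-US", target_lang="fr-FR"):
        """Yield (source, target) pairs from a TMX file."""
        for tu in ET.parse(path).getroot().iter("tu"):
            segs = {}
            for tuv in tu.iter("tuv"):
                lang = tuv.get(XML_LANG) or tuv.get("lang")  # TMX 1.1 used 'lang'
                seg = tuv.find("seg")
                if lang and seg is not None:
                    segs[lang] = "".join(seg.itertext())  # flattens inline tags
            if source_lang in segs and target_lang in segs:
                yield segs[source_lang], segs[target_lang]

    for src, tgt in read_tmx("legacy.tmx"):
        print(src, "->", tgt)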

Translation Management Systems have become so common that some think they are on the way out. You will, at the very least, need a Portal to support file transactions and share your linguistic assets with all the participants in your supply chain. Emails, preferably automatic notifications, should be used to support the transactions, but they should be avoided when it comes to file swapping. FTP is a common option, easy to set up and learn, and cheap to run, but it can soon turn into a mess and gives you zero Project Management visibility. In order to achieve efficient status monitoring, resource pooling and any type of automation, you should consider a Translation Management System.

Whether you go for the big guns like WorldServer or SDL TMS, or for something more agile like XTRF TMS, you will reduce the number of bottlenecks in your process: handoffs will go straight from one participant to the next. The Project Managers will still have visibility, but no one will have to wait on them to pass on the handoff before they get started. TM’s will be updated in real-time and new content will become re-usable immediately.

A few things to look out for in your selection:

  1. Fewer clicks = shorter kickoff time. Setting up Projects in a TMS is an investment. It is always going to take longer than dumping files on an FTP server and emailing people to go get them, if you look at an isolated Project. As soon as you start looking at a stream of Projects, a TMS makes complete sense. Still, a TMS’s worst enemy is how many clicks it needs to get going.
  2. Scalability: you need the ability to start small and deploy further, without worrying about licenses or bandwidth.
  3. Workflow designer: demand a visual interface, easy to customise, which can be edited without having to hire the services of the technology provider. Don’t settle for anything that will leave you at the mercy of the landlord.
  4. Hosting: weigh your options carefully here again. In-house is good if you have the infrastructure and IT staff, but letting the Technology provider host the product may be a more reliable option. This is their business after all; maybe you don’t need to reinvent the wheel on that one.
  5. User support: the cost and responsiveness of the Support service is essential. No matter how skillful you and your team are, once you deploy a TMS to dozens of individual linguists there will be a non-negligible demand for training and support. Make sure this is provided for before it happens.

Once you’ve made all these decisions, you will be in good shape to start building an efficient Localisation process. Last but not least, don’t forget to decide whether to spell Localisation with an “s” or a “z”, and then stick to it! 🙂

 

Related articles:

Crowdsourcing in Localisation: Next Step or Major Faux Pas?
Globalization – The importance of thinking globally
SDL Trados 2007: Quick Guide for the Complete Beginner
Which comes first, Globalization or Internationalization?
Who’s responsible for Localization in your organization?

 


Posted in Beginner's Guide | 3 Comments »

memoQ 5.0: Mr. Q Brings Change Management to the Localisation Continuum

Posted by Nick Peris on June 21, 2011

 
Mr. Q presents: memoQ 5.0!
Kilgray Translation Technologies introduced memoQ 5.0 to the World last week by means of a twin event. Gábor Ugray, Head of Development, hosted a webinar from the Kilgray HQ in Budapest for the online enthusiasts, while István Lengyel, COO, demo’ed it live from the Localization World 2011 conference in Barcelona.

MemoQ 5.0 will be available as a public Release Candidate on June 30, 2011 and should reach Final Release within a few weeks of that.

The Release Candidate version can be installed side by side with memoQ 4.5 and various upgrade paths will be available to current memoQ users.

Following the strong focus on Project Management in memoQ 4, the philosophy behind memoQ 5.0 is Change Management. Changes in source files are better managed through X-translate, while segment changes are tracked through a sophisticated versioning system. Illustrated examples of this and other new features are detailed below.

memoQ 5.0 Version Tracking

X-translate

The implementation of Major/Minor version control is powerful because of the simplicity with which it responds to a real need. A Translator is working on a file and receives an update to the source file; thanks to memoQ 5.0’s Major versioning feature, he or she can immediately generate an updated version of their bilingual file and continue translating.

There is no need to leverage, which would require the more labor-intensive process of pre-translating again from Translation Memories. One can simply go straight from a partially translated copy of version 1.0 to a partially translated copy of version 2.0.

The screencaps below show how to xTranslate a single file from the previous Major version of the file, then how the xTranslated segments are marked, and finally how to save a snapshot of the resulting file.

xTranslate1xTranslate2xTranslate3

It is also possible to export a 2-column file for comparison of 2 Major versions:

Export 2 columns to HTMLSide by side compare

Change Tracking

Change tracking enables segment level access to previous versions. The following images show how to enable custom track changes from the Translation menu, how the changes are highlighted in a document, and a further 2 options for translators and reviewers to see changes made to a file since they last edited it.

Track ChangesTrack Changes Against BaseTrack Changes (Reviewers)Track Changes (Translators)

Terminology in memoQ 5.0

Terminology extraction

MemoQ 5.0 will allow a substantial amount of Terminology work without requiring the use of a dedicated application such as qTerm. Users will be able to extract candidate terms from a Project:

Extracting Candidate TermsTerm Extraction Progress

Stop Words

The use of a Stop Words list will ensure easy noise reduction by preventing words such as “and”, “the”, or any others shortlisted by the user, from appearing as Candidate Terms (a minimal sketch follows the screenshot):

Creating and Editing Stop Word Lists
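
For readers who like to see the mechanics, here is a minimal sketch in Python of the general idea behind stop-word filtering during term extraction. It is not Kilgray’s implementation, and the stop word list is just a starting point:

    import re
    from collections import Counter

    STOP_WORDS = {"and", "the", "of", "to", "a", "in", "for", "is"}

    def candidate_terms(text, min_count=2):
        """Count single-word candidates, dropping stop words so they
        never surface in the Candidate Terms list."""
        words = re.findall(r"[a-zA-Z']+", text.lower())
        counts = Counter(w for w in words if w not in STOP_WORDS)
        return [(term, n) for term, n in counts.most_common() if n >= min_count]

    print(candidate_terms("The translation memory stores the translation units."))
    # [('translation', 2)] -- 'the' is suppressed by the stop word list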

Reviewing Candidate Terms

Candidate Terms can then be reviewed in context and possibly against an existing Termbase:

 Term Extraction ResultMerging Candidate TermsAccepted TermsDropped Terms

Lexicon

The Lexicon option will let you work with a Terms list without having to go through the full process of creating a Termbase. It is meant as an easy-to-use, immediately rewarding tool to manage Terminology within a Project. This should encourage Linguists to run quick Term extractions before starting a job, especially in cases where a Termbase is not available as part of the Handoff, in order to efficiently get a general overview of the Terms contained in a set of source files.

MemoQ 5.0’s Terminology feature does not support the TBX format; however, Kilgray’s fully-fledged terminology tool, qTerm, does.

memoQ 5.0 and nested file formats

Another very effective idea implemented in memoQ 5.0 is the support for file formats containing code belonging to other file formats. An obvious application is the case where the handoff is a spreadsheet containing strings copied from an XML or a software file. But there are other common cases, such as XML files containing HTML code.

The requirement here is to parse files twice, so that all code is recognised as such and the linguist can concentrate on translating with full confidence that all tagging is managed by the CAT tool. Here are 2 examples:

Cascading Filters

      1. Cascading Filters for a spreadsheet containing HTML: 
        HTML code in XLS - ExcelHTML code in XLS - memoQ 5.0Reimport As to Apply Second FilterAdding a Cascading HTML FilterDocument Import SettingsSaving Filter Configuration for Re UseFully Parsed File
      2. Cascading Filters with Regex Tagger for a spreadsheet containing UI strings: Run Regex Tagger to re-Parse XLS FileRegular Expression PatternsAdding Patterns to Configuration
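
The principle behind a regex tagger can be illustrated with a few lines of Python. The pattern and tag syntax below are invented for the example and are not memoQ’s own; the point is that placeholders are converted into protected tags so the linguist cannot accidentally edit them:

    import re

    # Hypothetical patterns for common UI placeholders: %s, %1d, {0}, &amp;, \n ...
    PLACEHOLDER = re.compile(r"%\d*[sd]|\{\d+\}|&\w+;|\\[nt]")

    def tag_segment(segment):
        """Replace protected substrings with numbered tag markers, returning
        the taggable text and the map needed to restore the original."""
        protected = {}
        def replace(match):
            key = f"<ph id={len(protected) + 1}/>"
            protected[key] = match.group(0)
            return key
        return PLACEHOLDER.sub(replace, segment), protected

    text, tags = tag_segment("Welcome %s!\\nYou have {0} new messages.")
    print(text)
    # Welcome <ph id=1/>!<ph id=2/>You have <ph id=3/> new messages.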

Source Content connectors

Finally, memoQ 5.0 will also in time be able to connect to repositories where content is dynamically added. It is designed with CMS integration in mind; however, the CMS connectors will only be released later this summer, as will the web-based editor webTranslate.

Posted in Kilgray, memoQ, News | 4 Comments »

Alchemy Catalyst 9.0: A Practical and Visual Guide

Posted by Nick Peris on November 15, 2010

I recently had the welcome surprise of finding an invite to a Catalyst webinar in my Inbox. It was with great anticipation and a touch of nostalgia for my Localisation Engineering days, that I clicked on the link and joined the meeting to discover what Alchemy had been up to.

I soon realised that a practical user’s guide would be the best way to cover this on Localization, Localisation. The Alchemy Software Development website already lists What’s New in this release, so rather than analysing the differences between Catalyst 8, for which we did complete Launch coverage, and Catalyst 9, I’ve put together a step-by-step tour based on the demo.

This article can be used by Localisation Engineers and Translators alike to preview the Catalyst 9 interface using the 30 or so screen shots included (see after the slideshow for full screen versions), and also to read through some recommended processes and tips, adding to my past article on the Leverage and Update Experts.


Creating a Project

The User Interface remains the flexible and now very familiar .net window, with its various docked panels and tabs. It’s also a stable interface which will cause little or no navigation headache to even the most novice user.

The first operation when getting started with Catalyst is to create a Project file, or TTK file. This is easily done by using the File – New menu and following the basic steps.

You will notice in the screen shots that the example used includes varied sample files, such as compiled help (.chm) not requiring any source or project files, and a WPF executable.
Locked strings

Preparing a Project

After the creation of the TTK, source files can be inserted either using the Insert Menu item or a context menu in the Navigator tab. Folder structures can also easily be used.

Once the files have been inserted into the TTK, it is time to prepare it for leveraging.
Translator Tool Bar Context Menu and Keyword Lock
This operation consists mostly of locking non-translatable strings and substrings. It can be tedious on a brand-new Project, but the work done can be completely leveraged to the various language TTKs as well as to any future versions of the project.

The lock keywords functionality has been improved in Catalyst 9: the txt file which holds the project’s keywords list is now automatically generated in the background as soon as the user locks a keyword.
Catalyst 9 UI Batch Keywords Locking

Once a keywords list has been created, it can in turn be used to automatically lock the listed keywords in the remainder of the project.

Another thing to note is that Maximum String Length can now be set on a batch of strings at once.

Leveraging previously translated content

Apart from Leveraging from the TTKs of previous projects, Catalyst supports leveraging from a variety of Translation Memory formats:
Keywords List

  • Translation Industry Open Standard (*.tmx)
  • SDL Trados 2007 (*.tmw)
  • Wordfast Pro (*.txml)
  • Tab-delimited (*.txt)
  • Alchemy Translation Memory (*.tm)
  • Alchemy Catalyst (*.ttk)
  • Alchemy Publisher (*.ppf)

Alchemy Translation Memory is a new proprietary format used to create Master TMs from completed TTK projects. This format allows Catalyst-specific context information (Dialog box ID, Menu Item etc.) to be stored, which can later improve the quality of leveraging by providing Perfect Matches. In Catalyst terms, a Perfect Match is a 100% match located in the same Dialog, Menu etc.
TM Compatibility List

Support for Alchemy Publisher, Wordfast Pro, Trados 2007 and the non-proprietary TMX provides compatibility with the other TM formats Catalyst might have to coexist with.

Noticeably, Trados Studio 2009 TMs (.sdltm) still do not appear to be supported.

Batch processing

The process recommended by Alchemy is to create an English-to-English Master TTK and then to automate its duplication and pre-translation for each target language in the Project.

This is an area where Catalyst 9.0 does seem to bring a good bit of novelty:
Create Job Expert

  • With Catalyst 7, engineers had to manually duplicate TTKs.
  • Catalyst 8 was a bit more helpful and created Project folders for target languages and project resources.
  • In Catalyst 9.0 however, the Job file and Scheduler take care of a lot of the repetitive tasks associated with preparing a new Project.

The Create Job Expert lets you use the Master TTK as a template to create the project folder structure and the corresponding target language TTKs.

Meanwhile, such tasks can also be added to the Scheduler. This new queuing system allows the user to start working on the next project while it processes queued tasks in the background.
Create Job Expert Batch Leverage

Automation

The Command line automation has been improved since Catalyst 8 to include Analysis. The complete Catalyst localisation process can now be automated.

Catalyst 9.0 Developer Edition also includes the Comm API which lets advanced users script TTK operations all the way down to string level, and output automation reports in txt or xml format.

Ensuring Quality and Consistency

In addition to Translation Memories, Catalyst 9 also supports several Glossary formats:

  • Text files, used in Catalyst since the beginning (.txt)
  • Terminology Exchange Open Standard (.tbx)
  • Translation Memory Exchange can also be used for Terminology (*.tmx)
  • SDL MultiTerm and MultiTerm Server
Catalyst 9 inline Validation

Validation still takes two forms: the Expert can be run to perform a global check, and inline validation can also be switched on as a non-intrusive real-time quality control. If a potential error is found, a flag is raised through the bottom pane, but Translators are not interrupted. They can simply go back to the issue by clicking on the notification once they are ready to attend to it.

The Thumbnail view seems to be a great tool for engineers regressing bugs. It gives a preview of all dialogs in a TTK and lets you click the one which matches, for example, the screen shot in a QA report, bringing you automatically to the location of this dialog in the TTK file.
Catalyst 9 Thumbnails

Translating in Catalyst

The Concordance search and Translator toolbar do not appear to have been changed. Both were introduced with Catalyst 8 where there was strong focus on improving the user experience from the Translator’s point of view, and they seem to have delivered.

The new Re-cycle button is a result of the same ambition. New translations can be propagated to the entire project by using the current project as an inline TM in the background. Layouts are not recycled, but fuzzies are supported.

Clean up Expert

Finally, the Clean up Expert has also received some improvements. As for all Experts, it is recommended to close the Project file before running it, and then select the file(s) to process from the Expert’s General tab.

Clean up now creates a postproject.tm Translation Memory and generates satellite assemblies for .NET.

Conclusion

In my opinion, this new generation of Catalyst still offers a great solution for visual localisation. Although the differences with Catalyst 8 may not make a bulletproof case for immediate upgrade, the 25% discount currently on offer does represent decent value.

Posted in Beginner's Guide, Catalyst, News, Software Localisation | 1 Comment »

SDL TMS 2007 Service Pack 4: Love and Hate

Posted by Nick Peris on June 1, 2010

SDL TMS 2007 - Localisation workflow

I always find it challenging to get a fair idea of what Enterprise tools can do before making a purchase decision. There is so much involved in setting them up that even if a trial version is available, the efforts required to perform meaningful testing are prohibitive.

Many such applications do not come ready out-of-the-box and require extensive customisation before they can be tailored to fit a specific business model.

This is why many purchase decisions are executive decisions, based on ROI reports and presentations showing what the software does. A demo might be set up for you on a dedicated server by the salesperson, and you’ll be left thinking “hmm… surely it’s not that simple”. This is also why 10 times out of 10, these pieces of software come with a Support package which lets you install regular and much-needed updates and bug fixes.

It doesn’t have to be this way!

If you have the opportunity, go knock on a few doors and try to find a company nearby which uses the software in a production environment. Contact them, ask to visit, get an independent demo. From my experience (not based on TMS that time), most people will be more than happy to tell you how much effort it took to set up, how many features still don’t work, but also how much their productivity has really increased and perhaps even how many of their employees have done a thesis on the subject! Bottom line: get real-life advice!

SDL TMS, or Translation Management System, is one such behemoth application. Trying to find independent information about TMS on the web is a challenge. In fact, even finding official information can prove frustrating. As for Special Interest Groups… those I found were for customers-only. It seems it’s buy first, we’ll talk later.

So what’s the big deal exactly? Well I’ve been working with TMS 2007 for about a year now and I have a few things to report: some good, some not so good.

What it does well

Let’s start with positive thoughts.

TMS is a workflow tool, designed to connect a customer directly to its localisation vendors and all their armies of sub-vendors. It handles big volumes and short turnarounds really well, and is reasonably good at supporting your Translation Memory and Terminology Management needs. It also offers the reporting facilities necessary for all members of your localisation ecosystem to invoice each other, and you.

TMS automates part of the role of the middle men, and is ideal for localisation consumers with a constant stream of translation, especially if they come in the shape of numerous small projects.

Multiple alternative workflows can be set up, depending on vendor selection, TMs to leverage against, TMs to update, need for Linguistic Review etc. Once the correct workflow is selected at the job creation stage, you can be sure it will go through all the steps required. There is little or no human error possible, at least not in scheduling and assigning tasks to the right participant.

TM updates are handled automatically, literally seconds after the last human input in the workflow.

Where it lacks

So are all the vendors really gathering orderly around the assembly line and localising thereafter like a happy family?

Not exactly. There are a few snags.

My main grief is around TM Maintenance or the lack of it. Because TMS automatically updates the Translation Memories at whatever stage of your workflow you told it to, manual editing of the TMs has been neglected. A user can perform a Concordance search, but it is impossible to edit the Translation Units found. One cannot use TMS to fix inherited inconsistencies or any error found in legacy TUs.

This makes implementing Global changes a very untidy task: one needs to connect to the TM Server (hosted by SDL in most cases) using SDLX 2007 Professional. This, to me, is total nonsense and here is why:

  1. increasingly, the business model in Localisation is outsourcing.
  2. once localisation is outsourced to agencies, these subcontract Single Language Vendors, who themselves might only be sub-contracting to freelancers.
  3. fewer and fewer Localisation consumers employ in-house linguists.
  4. their remaining in-country staff is Sales and Marketing, and has much more pressing matters to attend to than editing TMs.

Now which version are these freelancers more likely to have? SDLX 2007 Professional (€2,995) or SDLX 2007 Freelance (€760)? I think you probably guessed it. SDL’s licensing model prevents linguists from maintaining TMs in TMS and seemingly forces corporations which bought TMS to support their outsourcing setup, to fix TMs in-house!

There are some workarounds to this, but for a piece of software of this caliber, I think this is a pretty shocking limitation.

The integration with MultiTerm has similar issues: only some of the functionality is available through TMS; the rest, including editing Term entries, has to be done using MultiTerm Online or Desktop.

Performance issues also tend to drive a lot of linguists offline! Depending on their setup, a lot of them find it more efficient to download jobs, translate offline in SDLX and upload the finished work back into TMS. While there is technically no difference in the end result, this is a disappointing interruption of the workflow.

Service Pack 4: An End to the Suffering?

Squeezing under the gate at the last second, like Bruce Willis in a classic movie, TMS 2007 Service Pack 4 sneaks in before the long-awaited SDL TMS 2010 and comes to the rescue.

With TMS 2010 now possibly slipping into 2011, it is a welcome addition, particularly due to the improvements it brings. Here are the most significant end-user facing features:

Browser support: IE 8 support added (IE 6 removed in future)

TM import: ITD, zipped ITDs, MDB (SDLX TMs). This is a partial solution to the lack of a TM Maintenance feature I’ve talked about in this article.

Continued lack of support for TMX is attributed to the fact that this open standard is subject to too many proprietary implementations.

Reporting formats added: CSV, Excel 2007, PDF, RTF, Word 2007.

Branding and Fonts are customisable (by Professional Services).

TMS 2010 is expected to have end-user customisable reports.

Segment level QA Model for Reviewer grading

QA Models

This all-new feature in SP4 is crucial if your workflow includes Linguistic Review. All changes made by the Reviewers are now recorded, and the Reviewers can tag them using customisable Error Rating and Categories.

  1. Error Ratings and Categories: support for LISA model, SAE J2450, TMS classic out-of-the-box.
  2. User-specific models can be created. Number of points deducted can also be specified in the QA Model.
  3. Records can be retained at segment (for feedback to translators) or project level
  4. Scoring methods: absolute or percentage
  5. To apply a QA Model: add it to a Configuration (i.e. workflow), and it will be available to Reviewers working on jobs passed through this config.
  6. Reviewer usage: click the Star at segment level to open the QA model window and enter Category and Rating. Pass/Fail status does not prevent the reviewer from submitting or rejecting a job.
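
To give an idea of how such a QA Model turns review records into a score, here is a minimal, generic sketch in Python. The categories, penalty points and threshold are invented for the example and do not reproduce the LISA, SAE J2450 or TMS classic values:

    # Hypothetical penalty points per (category, severity).
    PENALTIES = {
        ("Terminology", "minor"): 1, ("Terminology", "major"): 5,
        ("Accuracy", "minor"): 1,    ("Accuracy", "major"): 5,
        ("Style", "minor"): 0.5,     ("Style", "major"): 2,
    }

    def qa_score(errors, word_count, threshold=99.0):
        """Percentage-based scoring: deduct weighted points per 100 words
        and compare the result against a pass threshold."""
        points = sum(PENALTIES[(e["category"], e["severity"])] for e in errors)
        score = max(0.0, 100.0 - points * 100.0 / word_count)
        return score, "Pass" if score >= threshold else "Fail"

    errors = [{"category": "Terminology", "severity": "major"},
              {"category": "Style", "severity": "minor"}]
    print(qa_score(errors, word_count=1200))  # ~99.54, 'Pass'

An absolute scoring method would simply compare the raw point total against a fixed allowance instead of normalising by word count.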

Posted in News, SDL TMS, Translation Management Systems | 5 Comments »

memoQ 4: Interview with István Lengyel

Posted by Nick Peris on December 22, 2009

I have been trying to diversify the topics we cover on LocLoc, and especially the tools we talk about. It started recently with a QA tool and now continues with a CAT tool. I already know from the survey I’ve had on this page that a lot of you are familiar with Kilgray’s memoQ. This is a preview of what to expect from the forthcoming memoQ 4, from the mouth of Kilgray’s COO, István Lengyel.

[Nick Peris] Hi István, could you introduce Kilgray and your role within the company?

[István Lengyel] Hi Nick! Thanks for inviting me to do this interview. Kilgray Translation Technologies is an independent company dedicated to the development of clean and innovative tools for translation, but so far we are by far the best known for our memoQ translation environment. Though we are based in Hungary and all the founders are Hungarians, we became quite an international team in the last two years, opening up in Germany, Poland and now in the US. It’s really great to work in this team, as we have people coming from all sorts of companies such as Idiom, Passolo, SDL Trados, etc., and every addition to the team opens up new perspectives and shows new approaches – the company culture builds on respect and cooperation.

I am one of the architects of memoQ and also the chief operating officer at Kilgray, though in reality I’m mostly managing our sales and marketing team and our international expansion.

[Nick] Could you give a general overview of what memoQ is for readers who are not familiar with it?

[István] memoQ is an integrated translation environment that has a couple of focal points. First, it is easy to use, easy to learn. Second, we translate a lot in it and manage memoQ’s localization in memoQ itself, so we developed an eye for details – there are lots of smaller features that really make life easier. Third, from the very beginning we were concentrating on collaboration, and even the first version included an internet-enabled TM/TB server. Fourth, we don’t believe that we should lock in any of our customers – the entire system supports interoperability between tools to the maximum extent, meaning that you can process files prepared by virtually any major translation tool, and you can also prepare files for processing in other tools. There’s also a full set of documented APIs available for integration with other tools. Fifth, leverage, which means that we are trying to make the most of your resources. There were a couple of things where memoQ pioneered: we were the first to introduce real-time previews that change as you type, we were the first to introduce communication such as knowledge bases and instant messaging and offline synchronization into a translation memory server, we were the first to introduce the translation memory-based segmentation where pre-translation emulates the way your translators join and split segments, and we were the first to introduce the automated concordancing. But quite frankly, we are just as happy to take over things that work from other tools as we are to introduce new stuff.

[Nick] I know you are preparing to release a new version; could you give us a release date for memoQ 4?

[István] A few days ago we named January 31, 2010 for the release date, but I was reminded that it’s a weekend. So the first week of February. (Well, who cares about weekends? :))

[Nick] What are the main changes from memoQ 3.5 and main reasons to upgrade?

[István] There are so many changes that I can hardly list them! memoQ 4 is the first memoQ version that really focuses on project management. We like to build bottom-up and believe that an organization will only have a good experience deploying a tool if the translators like it, and we spent the last five years making the translators happy. So let’s start with the revolutionary feature: post-translation statistics. Imagine a situation where several people are working on the same set of similar documents, using a server-based translation memory. There can be a lot of fuzzy matches coming from the other translator’s translated entries, but so far there was no way in any tool to enumerate these matches, because the person who starts working later gets more matches than the person who is the first to start. memoQ 4.0’s post-translation statistics will solve this Gordian knot, and give you the actual fuzzy match analysis for every translator after the project. This way finally there is a business model for server-based translation.

Other than this, the biggest change is that we have upgraded the concept of translation memory servers to the concept of resource servers. So far you could share translation memories, term bases and documents between translators, and you could set up projects for them centrally. In the new version, you can share every other resource such as auto-translatables (for people used to Trados lingo: customizable placeables), non-translatables, segmentation rules, QA settings, keyboard shortcut settings, ignore lists for the spell checker and so on – 12 of them, all together. What’s more, sharing this happens in the background so you can start the publication of a big TM on the server and go on managing other projects in the meantime. These resources can all be exported into an XML-based format so clever project managers can prepare them also automatically.

memoQ 4 also brings finally the concept of multilingual projects. You can create handoff packages and receive delivery packages, or you can simply publish a project on the server. Those who receive the handoff package can in turn create new handoff packages (handy for a multi-tier enterprise-MLV-SLV-translator setup), and through delivery the files and reports are updated automatically. The handoff packages are just zipped containers of open-source format data – XLIFF for documents, TMX for TMs and CSV for terminology. You can process the packages in any tool, so the users are not locked in.

Compared to these improvements, the brand new text editor, the completely revamped user interface and the streamlined quality assurance seem small. Even the previous version of memoQ got quite a lot of credits for its good support of bidirectional and CCJK languages, memoQ 4 takes this further and also introduces support for Indic languages. We are introducing a very advanced multi-tier undo/redo logic, real-time spell checking and other minor improvements. The quality assurance checks have also been dramatically improved and also the interface for fixing warnings has been fine-tuned.

And I failed to mention so many things! memoQ 4 is the single biggest upgrade memoQ ever received.

[Nick] For non-memoQ users, could you give us the main reasons to switch to memoQ 4?

[István] Because other people do and they are happy about it! 🙂 Just like every company, we make mistakes at times but there has not been any single case that anybody asked for a refund. Seriously, I think the main reasons to switch to memoQ are collaboration, interoperability and support. memoQ is a truly collaborative application, it is one of the few tools that enable simultaneous translation and proofreading on the same document, complete configuration of projects for your translators, or using several translation memories or term bases that can be local, remote — they can even be on different servers — or offline synchronized. The server is fast even on a HSDPA connection and it’s also very affordable – no wonder we have over 150 servers out there.

The other important aspect is interoperability. Our main market is language service providers, and an LSP can never say that they use only a single tool, period, otherwise they lose business and what’s more, they can also lose translators. With memoQ you can process documents and packages created by other tools, and you can prepare packages in industry-standard formats for other tools too. Therefore you don’t find yourself in a situation that you bought the tool because you liked it and then you have to fight with everyone around you to make it accepted.

And the third most important aspect is support. I think Kilgray’s support is just great – fast, focused and friendly.

[Nick] What is the pricing structure for memoQ 4?
What are the different Editions of memoQ 4?

[István] memoQ 4 comes in three client editions: translator standard, translator pro and project manager.

memoQ translator standard is for those translators who never work in teams. It does not enable access to servers and does not enable export of files into XLIFF or bilingual DOC, only memoQ’s proprietary MBD format. It also lacks the ContexTM (101%) matching which takes the context also into account, and comes without support. But the price tag is attractive: 99 euros a year.

The memoQ translator pro is the edition for professional translators and very small translation companies who don’t want to invest into a server solution. It costs 620 euros.

The memoQ project management edition comes with multilingual project management and reporting functionality and we charge around a thousand euros for that.

When it comes to server technology, we sell our solution with mobile (ELM or floating) licenses, meaning that companies can give away and take back licenses to translators over the internet. The initial package contains five mobile licenses, and we sell additional bundles of five licenses at very competitive prices. When it comes to servers, we prefer not to sell without a trial period of 30 days – we want everybody to use the tool, not just buy it for the drawer.

[Nick] How did you take into consideration user feedback during the development of memoQ 4?

[István] Oh I could name the people who contributed with their user feedback here! I think it’s worth mentioning how we work. Basically there are four people who decide on what gets into the next release, and every release has a theme. These themes are contained in our 5-year roadmap and we regularly come together for things that we call “walk in the woods”‘ – creative sessions outside the office where we discuss the main ideas and concepts. We personally talk a lot with users and try to learn the rationale behind their feature requests. These talks shape the main themes/features a lot. On top of that, we have a system to archive all the threads on feature requests, and we go through these regularly. I could give you a rather precise list of features for the next three versions!

So basically the user feedback is taken into consideration on two levels: when we realize that a business problem is hard to solve with memoQ, we incorporate the solution into the high-level concepts. The other level is the feature level where for example users request amendments to file filters or suggest small usability improvements. If these are justified, these can go straight into the feature overview.

[Nick] How is Terminology Management undertaken in memoQ 4? What are the Termbase formats supported?

[István] Terminology management is one of the most controversial components in memoQ! So far we only support CSV and – surprise-surprise – TMX as import formats and can also export into Multiterm XML. Why TMX? Just think about software localization and then the help and you’ll understand. With memoQ we decided that this is a translation tool and not a terminology application, and therefore we gave a finite set of attributes but something that is pretty comprehensive: you can have synonyms, definitions, notes, grammatical information, contexts, project, domain, subject, client information, and a few other fields. You can also have images in the term base, and forbidden term variants can also be flagged. From the workflow point of view, memoQ has had a term base moderation feature since v2.0 in 2006, which means that terminologists may need to approve all terms suggested by translators before they become final. Terminology matching is really exciting: you can use wildcards to indicate the end of the invariable part of every word in a term, i.e. for a language like Spanish you can enter cinturón* de seguridad and that will also find cinturones de seguridad. For translators of Slavic languages this is really crucial (fuzzy matching does not always work for terms). I can list quite a few pros for memoQ’s terminology management but I must say that it’s a very practical approach. However, we understand that corporate terminology management is not a subset of translation, and terminologists may need some more freedom.

Expect that freedom in a third-party tool based on the memoQ engine soon.

[Nick] Is there anything specific to memoQ in the way Translation Memories are created and maintained?

[István] Translation memories are by default context-enabled in memoQ, and memoQ supports two kinds of contexts: the segment before and after and context bound to structural information. This latter means that if you have for example the software strings in an XML or Excel file, with an attribute indicating where the text appears, you will get a 101% match if the attribute is the same to the attribute where you originally entered this translation – this way you can shuffle the translatable strings and still keep the context information. If you speak the Idiom lingo, this is very similar to ICE and SPICE matching.

As for maintenance, there are a couple of things that are quite unique. First, a 100% or 101% match for us is only a match that is identical both in content and formatting to the original. But we have a special bracket, 95-99% that contains segments where numbers, formatting, whitespaces, punctuation marks can be different. Any change in the text results in something lower than that. You can join and split segments wherever you want, and when you get an update to the document, the TM-driven segmentation will automatically join and split the segments according to your previous translation, as it looks into the translation memory for better matches through joining and splitting. During pre-translation, cases where you get multiple 100% matches (because you translated the segment differently in two contexts, and this third context is unknown so far) are flagged and they are very easy to locate. All these features fall under the umbrella term we use for design: “reproducibility”. I think it’s also worth mentioning that memoQ has a built-in TM editor and can work with as many TMs at a time as you wish. Oh, and yes, a minor nuance, just to make things elegant and please those who are really tech-savvy: our support for TMX also covers attributes, so if you import a TMX file coming from another tool that has attributes, even if the TMX attributes there cannot be displayed in memoQ, you can expect that the TMX export from memoQ will preserve and contain them – so memoQ does not swallow the information that it cannot process.

[Nick] Is there any new feature in memoQ 4 you are particularly fond of or proud of? Maybe some anecdote about features which took a lot of effort to achieve and which you are now very happy to bring to memoQ 4 users?

[István] Well, I’m a person who prefers the big picture to the small details, and for me the biggest achievement – and a big praise goes to Gábor Ugray, our head of development who designed these features – is that the tool did not get more complicated for translators according to the feedback of those users whom we showed the system. We always pay a lot of attention to the user interface, but when we started conceptualizing memoQ 4 about two years ago, keeping its simplicity seemed like a daunting task. The visual marker of the entire resource management and multilingual project management feature is now just two drop-down lists: the server selector and the language selector. And I am of course proud of the fact that the resource concept makes the entire system future-proof – no matter what sort of a linguistic resource comes into existence in the next years, we’ve got a place for it, and savvy users are also welcome to write third-party resource managers.

[Nick] We are seeing a merging trend where tools are less specific to either software or documentation. This is partly due to the evolution of content types, and partly to an effort by tool developers to become more all-encompassing. How does memoQ fit into this? How is your support for software localisation? Also XML and XLIFF?

[István] I saw this very much in 2005 when we started off but I don’t see it that much anymore. About a year ago or so we implemented visual localization support for RESX files and quite a few users are using it, but we have no plans to implement visual localization for other formats such as RC or binary files. On the other hand there are quite a few considerations in memoQ that make it a very good tool for localizing Help content. I already mentioned the TMX import into the term base and the support for context based on another column in the Excel file or an attribute in the XML file, I’d like to mention the automated concordancing feature that was inspired by one of our translation jobs – in our earlier lives as translators – where TM management (another issue I could talk about for hours) was virtually non-existent. I don’t want to name the end-client and the LSP we got this from (they are both very reputable and well-known in localization), but basically to translate the help of version 8 of a well-known application we only got a TM that contained version 2 to 7 of the same application. No terminology, no localized software strings for version 8, nothing. We spent hours to find out what screen caption has been translated before and what expressions did we have to coin, because – as it is with software – quite a few of them were 8-10 words long, and of course developers make changes to these every now and then, changing one or two words maximum, adding a few words to the end, etc. The automated concordance automates this manual process: it automatically gives you the longest multiword expressions that appear at least a given number of times in the translation memory. It does not give you the translation in most cases, but if you select it, it opens the concordance window with the right expressions. And yes, the concordance can look for a series of words. So basically we don’t want to take away business from the excellent software localization tools, but we definitely want to be the best technology for translating help and manuals.

[Nick] Do memoQ and Kilgray offer workflow technology allowing supplier and clients in the localisation chain to work together online?

[István] Our workflow is a linguistic one, and not a highly structured one. We coined two terms. For us, horizontal workflow means when people work together on the same task. Vertical workflow is the traditional workflow, passing along the files between different people doing different jobs. memoQ is excellent in helping people work together on the same task and has a lot of workflow tools such as moderated term bases, simultaneous translation and proofreading, different forms of review, communication and knowledge bases, etc. From the point of view of traditional workflows, we only cover translation and review – items that happen within the tool. There’s no way to integrate things like source text review, DTP or settlements into memoQ. However, the extensive set of APIs enable integration with workflow tools, and at this point I have to mention that both Beetext Flow and Plunet Business Manager do a great job when it comes to deep integration. They can both take care of the entire process, and generate and maintain the projects automatically in memoQ. One of the things we are putting a lot of emphasis on nowadays is client review. I think memoQ is one of the best tools for this, but there is still a lot of room for improvement.

[Nick] Could you say a few words about the memoQ support network? How can new users avail of the experience of other users and if necessary receive support from Kilgray directly?

[István] Here are a couple of interesting resources: http://rc.kilgray.com – the Resource Center that contains training videos, guides, filter configurations for XML-based file formats, but also interesting articles on general topics such as TM management, technology purchase pitfalls, etc. for people and companies not using memoQ.

The memoQ Yahoo! Group (http://tech.groups.yahoo.com/group/MemoQ/) offers the expertise of other users but we also contribute often, and hey, you have the best experts of the competition also there and they often contribute too.

There is a memoQ wikibook too, and the forums on proz.com and other sites can also be interesting.

If direct support is required, it’s primarily through our support email address – please don’t publish the address directly on your website, we don’t want more spam there, but it’s at kilgray.com.

[Nick] Is it too early to ask you about the roadmap? What are your plans for memoQ?

[István] It’s not too early at all, but I’m afraid I can’t tell much about the big improvements at this point. One thing is for sure – after 4.0, we will relax a bit and iron out any rough edges that may have remained in this brand new tool. One of the things that many users asked for and will be there in 4.1 (or whatever the final version number will be) is the bilingual DOC table format for review with comments. But one thing is for sure, you can expect another major version with a huge new resource in 2010.

[Nick] This has been a very informative interview. I thank you for your time and detailed answers and look forward to reviewing memoQ4 in the new year!

Posted in Interviews, Kilgray, memoQ | 3 Comments »

Alchemy Catalyst 8.0: Official Launch

Posted by Nick Peris on May 4, 2009

Alchemy Catalyst 8.0

On Friday, May 1st 2009, Alchemy Software Development officially launched a new iteration of their visual localisation tool and flagship product: Catalyst 8.0.

The event was held at the Alexander Hotel in Dublin, Ireland, minutes away from Alchemy’s HQ. On offer were a feature highlights demo by Director of Engineering and Chief Architect Enda McDonnell, an informal meet-the-developers opportunity and client case studies by representatives of Citrix, Creative and Symantec.

This article reports and comments on some of what was said and shown.

A Total Visual Localization™ solution

Created mostly as a software localisation tool, Catalyst has now clearly outgrown this limiting description. The trademark visual editing capabilities now cover most aspects of localised content publishing:

  • Help
  • Web sites
  • Software applications

Reaching out to translators

But Catalyst is sometimes still seen as an engineer’s tool. Alchemy are aware of this and have been listening to feedback from professional translators. The result is a translating environment which undeniably seems more linguist-friendly. There is a convergence with the interactive translation environment in Trados, which is only a part of a general strategy to increase translators’ productivity by lowering the time needed to get accustomed to various tools.
The New Translator Toolbar

  • Translator tool bar:
    • live validation: flagged with non-intrusive warning symbols
    • keywords: locking and validation for in-segment non translatables
    • internal tag management
    • multiple matches displayed
  • Switch to the industry-standard terminology exchange format (TBX)
  • Supplementary Glossary for translators to populate their own reference material
  • Unlimited number of TM’s and web-based Machine Translation (MT) service ensure there is always a match

Changes to ezParse

In order to keep up with the long-standing ambition of providing support for the latest file formats, changes have been made to Catalyst’s parsing tool.

  • WPF (baml): full compatibility including visual editing of WPF forms and parsing out of .NET 3.0 objects
A WPF Form in Catalyst 8.0
  • Conditional XML: can now set the value of an element (or one of its attributes) to be localisable only if the value of another of its attributes indicates it should be treated as such (similar to functionality added to the settings file in Trados 2007).
    Conditional XML
  • Multilingual XML: supported by reading the source segment in one element but storing the translation entered into another. While this is a very up-to-date feature, there seem to be some limitations in terms of process. The translators will only deal with one language pair, so post-translation engineering will involve leveraging from multiple partially translated TTK’s back into the “Master” TTK before a fully multilingual file can be extracted. This should however be made easier by the updates made to Experts such as Leverage.
Multilingual XML

Updates to the Experts
The Leverage/Update Expert

  • Programmable API’s (COM and Event) are provided to encourage client-developed automation. This was a strong theme across both the Alchemy presentation and most of the guest speakers’. It has been a feature of Catalyst for some time but is now emerging as the area where Catalyst gets ahead of the CAT pack.
  • Multiple TTK’s, multiple languages and multiple TM’s to leverage from, all at once: this sounds like great news and is the feature I personally look forward to the most.
  • Target folders can be set and original TTK’s preserved (necessary to achieve previous point).
  • Leverage algorithm improved to search for 100% match in all TM’s provided before searching for fuzzy matches.

Cutting-edge Technology
Thumbnails

  • Improved navigation: thumbnails for Forms, Dialogs, WPF, HTML, graphics…are the latest addition to the visual features.
  • Improved validation: live and programmable (API). Catalyst 8.0 comes with an updated list of validation tests and also offers the ability to create your own: custom .NET objects can be called by Catalyst during Validation but also file insertion, extraction etc.
  • Underlying technology upgrades make Catalyst future-ready: the compiler was upgraded to Visual Studio 8, which is relevant both to Windows 7 compatibility and a future 64-bit Catalyst.

Screen caps courtesy of Alchemy

Posted in Catalyst, News, Software Localisation | 4 Comments »

XML in Localisation: What can it really do for us?

Posted by Nick Peris on April 8, 2009

Have you ever wondered how XML could possibly be relevant to our needs? Localising XML files is pretty much straightforward. But what of using XML to localise?
From English XML to Localised RTF, HTML, PDF ... and XML

As localisation professionals we’ve all known about XML for quite some time now. We understand that as a Markup Language, it is closely related to HTML. We also know that it is Extensible, meaning that the tags and structure are user-specific. This gives us the picture of a very powerful and flexible language.

But I’m sure we also all have come across an xml-based document (a “.xml file”), which we have launched in our favorite browser, only to be treated to a pretty unattractive page of…XML code!

So what can that powerful and yet somewhat undefinable animal really do for us?

This article shows a practical example of xml technology applied to a specific localisation process. In doing so, it also illustrates some of the advantages of having a dedicated Localisation Team or Department, rather than allowing various departments in an organisation to manage their own localisation. In this case, a simple handover of responsibilities from a Marketing team to a Localisation team generated a major leap forward in process, efficiency and quality control. Here is how:

Original setup

In this organisation, the process for creating and localising marketing and web content was the following:

  • 1 master document – the product sheet – was created for each new product released.
  • The product sheet was localised into 13 languages.
  • Relevant sections were pasted individually into the website for each language.
  • Relevant sections were also pasted individually into a printable version which was converted to PDF again for each language.
  • The localised doc files were also circulated.

There were 2 major issues with this:

  1. Copying and pasting made the process extremely time-consuming and error-prone.
  2. No translation memory system was used, making leveraging impossible and quality control of the localised content solely reliant on proof readers.

Solution implemented

The Localisation team was handed the responsibility of localising this content mainly to free up Marketing resources. Rather than simply taking over, they identified opportunities for improvement and initiated an R&D effort in XML Single Source Publishing. The goal now was to automate as much of the process as possible, and free up time within the agreed standard turnaround for systematic quality control.

The new process ended up as follows:

  • Product sheet created in xml by the authors, using the free WYSIWYG XML authoring tool Altova Authentic®.
  • The xml schema was designed to be compatible with the web content management system used to create localised product pages.
  • A Trados ini file was created to parse out all non-localisable content in the xml code.
  • XSL Transformation and Apache FOP were used to automatically generate all localised XML, HTML, RTF and PDF copies after post-translation processing in Trados (a minimal sketch follows this list).
  • A VB Developer created a tool to manage all Altova StyleVision®-based automation from one single UI.
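
For the curious, the XSL Transformation step can be sketched in a few lines of Python with lxml. The stylesheet and file names below are placeholders for the real assets, and the PDF output would go through an XSL-FO stylesheet and Apache FOP rather than this HTML example:

    from lxml import etree

    def transform(xml_path, xslt_path, out_path):
        """Apply a stylesheet to a localised product sheet to produce
        one output format (HTML here)."""
        stylesheet = etree.XSLT(etree.parse(xslt_path))
        result = stylesheet(etree.parse(xml_path))
        result.write_output(out_path)  # honours the stylesheet's xsl:output settings

    # One call per language and output format:
    for lang in ("de-DE", "fr-FR", "ja-JP"):
        transform(f"product_sheet.{lang}.xml", "web.xslt",
                  f"product_sheet.{lang}.html")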

Result

  • Upload of complete xml product sheets to the website for each language rather than copying and pasting independent fields (unfortunately batch upload was not permitted by the web content management system).
  • The Internet team saved 75% on the time required for localised product webpages to go live.
  • Other content types were all published simultaneously.
  • Use of Translation Memories and pro-active Terminology Management cut cost and increased consistency.
  • Thorough Quality Checks were also processed in batch using QA Distiller™ which helped catch multiple terminology and value errors before publication.

The key to the success of this new setup, apart from choosing to use XML, was the ability to revise the process from beginning to end. Because the Localisation team were allowed to have a say in the authoring process, efficiencies were generated across the whole span of Marketing and Web content creation, and XML Single Source Publishing was successfully implemented.

Posted in XSLT and FOP | 1 Comment »