What are tokens?
A token is a sequence of characters that software interprets as having a specific meaning. Some tokens represent other data, while others describe how that data should be treated.
Tokenization is applied differently across industries. In banking, tokens stand in for sensitive data, while in gaming, tokens might reference user-specific data that changes from one player to the next.
Read on to find out how important tokens are for localization, then discover how using the right localization platform can make it easy to incorporate tokens into your workflow.
What kind of tokens are used in localization?
Localization has many uses for tokens. Here are three of the most common:
- Variables are tokens that represent dynamic data in translated text. This data might include a player’s name, the cost of an item, an in-game location, or the player’s score. These data items are stored in a database, and the token automatically fetches the correct item depending on the context (see the sketch after this list).
- Formatting tokens indicate how text should be displayed. For example, on a website, tokens may instruct browsers to display some text as headers. Markup and HTML are two popular methods for creating formatting tokens during localization.
- Code tokens stand in for snippets of code that are executed when the project is compiled. More complex than formatting tokens, these can be used to embed video files or perform other advanced functions.
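To make the variable case concrete, here’s a minimal Python sketch of how a variable token might be resolved at runtime. The token names ({player_name}, {coins}) and the data source are hypothetical; in a real project, the values would come from your database or game state.

```python
# Hypothetical translated strings containing variable tokens.
translations = {
    "en": "Welcome back, {player_name}! You have {coins} coins.",
    "de": "Willkommen zurück, {player_name}! Du hast {coins} Münzen.",
}

# Dynamic data that would normally come from a database or the game state.
player_data = {"player_name": "Akira", "coins": 1250}

def render(locale: str) -> str:
    """Substitute the variable tokens with the player's actual data."""
    return translations[locale].format(**player_data)

print(render("de"))  # -> "Willkommen zurück, Akira! Du hast 1250 Münzen."
```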
How to use tokens when localizing
When used effectively, tokens add flexibility and customizability to localization, making the process much easier. We’ve compiled our top token tips to make sure you get the most out of tokens when localizing with Gridly.
1. Define your tokens
Make sure your entire team can recognize tokens and that everyone knows what each token means. Translators must be able to spot tokens within strings and understand what the token represents. Otherwise, translators may accidentally translate or mistype tokens.
2. Validate your tokens
Before passing texts onward for production, validate the tokens in each language as part of your localization quality assessment (LQA) process. Misused tokens can cause text to be displayed incorrectly, and faulty code tokens may even break your build.
First, run a token check on your source language. Confirm that the tokens in your source language are all supported by your project and that they’ve been correctly inserted into the text. If you’re using pivot languages in a multi-step localization project, repeat the process for those languages as well.
Then, check the tokens in your target languages. This often means checking to make sure the tokens are used the same way in your source and target languages, but sometimes, you’ll have to check each language individually.
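As a rough illustration of such a check, the sketch below compares the tokens found in a source string with those in a target string and reports any mismatch. The square-bracket token style and the sample strings are assumptions (matching the Japanese examples below); adapt the pattern to whatever token format your project uses.

```python
import re

# Assumed token syntax: square brackets, e.g. [player] or [title].
TOKEN_PATTERN = re.compile(r"\[[^\]]+\]")

def tokens_in(text: str) -> set:
    """Return the set of tokens found in a string."""
    return set(TOKEN_PATTERN.findall(text))

source = "You are very cool, [player]!"
target = "[player] はかっこいいですね!"

missing = tokens_in(source) - tokens_in(target)  # tokens dropped in translation
added = tokens_in(target) - tokens_in(source)    # tokens introduced by the translator

if missing or added:
    print(f"Token mismatch - missing: {missing}, added: {added}")
else:
    print("Tokens match between source and target.")
```

As the next section explains, a mismatch flagged this way isn’t always an error: some languages legitimately add or drop tokens.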
Added or removed tokens
Since languages often have different grammar and syntax rules from one another, it may not be possible to use the same tokens in the same way across all languages in your project.
For example, Japanese speakers may sometimes use the name of the person they’re speaking with instead of referring to them as “you.” Consider the following string:
Source (English): You are very cool!
Target (Japanese): “[player] はかっこいいですね!” — [The player’s name] is very cool!
In other situations, the speaker may omit any overt reference to the person they’re addressing. In that case, the translation would look like this:
Source (English): You are very cool!
Target (Japanese): “かっこいいですね!” — Very cool!
If addressing a senior figure, speakers may use their title instead of their name or a pronoun. Here’s that sentence again:
Source (English): You are very cool!
Target (Japanese): “[title] はかっこいいですね!” — [The player’s title] is very cool!
Your English source may not even have a [title] token, and in this case you’d need to create one to localize into Japanese.
It’s best to check languages separately for token consistency, since not all languages will use the same tokens in every string, or in every way.
3. Use tokens to search
Searching your content by token is a quick way to check that tokens are implemented properly, or to find ones that need to be modified. Include tokens in regular expression (regex) searches to quickly find the content you need.
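As a generic illustration (outside of any particular tool), the sketch below uses a regex to find every record that contains a square-bracket token. The record structure and token style are assumptions for the example.

```python
import re

# Hypothetical content rows; in practice these would be your localization records.
rows = [
    {"id": "str_001", "en": "You are very cool, [player]!"},
    {"id": "str_002", "en": "Welcome to [location]."},
    {"id": "str_003", "en": "The shop is closed right now."},
]

# Find every row that contains a square-bracket token.
token_regex = re.compile(r"\[[^\]]+\]")
for row in rows:
    hits = token_regex.findall(row["en"])
    if hits:
        print(f"{row['id']}: {hits}")
```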
3 easy ways to manage tokens in Gridly
Ready to add tokens to your localization workflow? Gridly is a content management system designed from the ground up for localization — and that means we’ve included several great ways to make sure your tokens are good to go.
Learn more: how spreadsheet-based content management works
1. Use Tags
Tags are Gridly’s built-in token detection and management system. Gridly uses Tags to keep your tokens organized and help them stand out to your translation team. With Tags enabled for your Grid, any token that appears in a string within a column marked for localization is highlighted, so translators know not to translate it.
Gridly automatically detects Tags in real time using three methods:
- Auto-detect: Gridly scans your content for tokens matching its internal list of the tags most frequently used by various technologies and markup languages, such as Ruby on Rails, Gettext, C#, Razor, and more.
- Detect with RegEx: Gridly will detect Tags matching your search criteria. For example, to find tokens wrapped in curly brackets {}, use the pattern {[^}]+} (see the short demonstration after this list).
- Detect custom tags: Input your own list of custom tags, and Gridly will detect them throughout your project. This option is best for content featuring a large number of variable tokens, such as in a mobile game.
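To illustrate what that curly-bracket pattern picks up (this is a generic regex demonstration, not Gridly’s internal implementation):

```python
import re

# The curly-bracket pattern from the RegEx example above.
pattern = re.compile(r"{[^}]+}")

sample = "Hello, {username}! Your score is {score}."
print(pattern.findall(sample))  # -> ['{username}', '{score}']
```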
Gridly not only auto-detects Tags, but it also compares Tags between source and target languages. Tags without a match can be easily filtered out.
For more information on how to manage your Tags, check our detailed guide to using Tags in Gridly.
2. Include tokens in your localization glossary
A localization glossary contains frequently used terms along with approved translations and other helpful information to aid translators in their work. Include your tokens in the glossary so translators can find out what each token means and how it should be used.
When Gridly detects a token in your source text, it’ll display a glossary entry with the relevant information. Translators will know exactly how to work with that token when localizing that string.
3. Validate tags with AWS Lambda functions
Gridly supports the use of AWS Lambda functions to run code within your Grids without needing any supporting infrastructure. You can write a Lambda function that detects tokens in your source language and checks if that token is in your project’s list of supported tokens. The function can output its findings into a new column that you can quickly scan for issues.
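Here’s a minimal sketch of what such a Lambda handler might look like, assuming a hypothetical event payload that carries the source text and the project’s list of supported tokens (the actual payload depends on how you wire up the integration):

```python
import json
import re

# Assumed token syntax: curly-bracket tokens such as {player_name}.
TOKEN_PATTERN = re.compile(r"{[^}]+}")

def lambda_handler(event, context):
    """Flag any token in the source text that isn't on the supported list.

    The event fields used here ('source_text', 'supported_tokens') are
    assumptions for illustration; adjust them to match the payload you send.
    """
    source_text = event.get("source_text", "")
    supported = set(event.get("supported_tokens", []))

    found = set(TOKEN_PATTERN.findall(source_text))
    unsupported = sorted(found - supported)

    # The findings can be written back into a dedicated column for review.
    return {
        "statusCode": 200,
        "body": json.dumps({"unsupported_tokens": unsupported}),
    }
```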
AWS Lambda functions can also help when validating tokens as part of LQA. Run a function that shows how tokens will be displayed on the front end of your game or app to easily detect problems that might be hard to identify by looking at the text string alone.
Boost your localization process with tokens the easy way
Tokens are a great way to make your localization process smoother, but there are plenty of ways for things to go wrong. Steer your project down the right path with Gridly’s built-in token support features.
Auto-detection and glossaries help translators interpret tokens correctly, and tokens are validated the moment translators type them into Gridly. Meanwhile, Gridly’s fine-tuned content filtering lets you quickly check for incorrect tokens with just a few clicks.
Features like these are just a small portion of what’s waiting for you when you manage and localize content with Gridly. Get started today.