Unicode is an industry standard designed to allow text and symbols from all of the writing systems of the world to be consistently represented and manipulated by computers. Developed in tandem with the Universal Character Set standard and published in book form as The Unicode Standard, Unicode consists of a character repertoire, an encoding methodology and set of standard character encodings, a set of code charts for visual reference, an enumeration of character properties such as upper and lower case, a set of reference data computer files, and rules for normalization, decomposition, collation and rendering.
The Unicode Consortium, the non-profit organization that coordinates Unicode's development, has the ambitious goal of eventually replacing existing character encoding schemes with Unicode and its standard Unicode Transformation Format (UTF) schemes, as many of the existing schemes are limited in size and scope and are incompatible with multilingual environments. Unicode's success at unifying character sets has led to its widespread and predominant use in the internationalization and localization of computer software. The standard has been implemented in many recent technologies, including XML, the Java programming language and modern operating systems.
Origin and development
Unicode has the explicit aim of transcending the limitations of traditional character encodings, such as those defined by the ISO 8859 standard, which find wide usage in various countries of the world but remain largely incompatible with each other. Many traditional character encodings share a common problem in that they allow bilingual computer processing (usually using Roman characters and the local language) but not multilingual computer processing (computer processing of arbitrary languages mixed with each other).
Unicode, in intent, encodes the underlying characters (graphemes and grapheme-like units) rather than the variant glyphs (renderings) for such characters. In the case of Chinese characters, this sometimes leads to controversies over distinguishing the underlying character from its variant glyphs (see Han unification).
In text processing, Unicode takes the role of providing a unique code point (a number, not a glyph) for each character. In other words, Unicode represents a character in an abstract way and leaves the visual rendering (size, shape, font or style) to other software, such as a web browser or word processor. This simple aim becomes complicated, however, by concessions made by Unicode's designers in the hope of encouraging a more rapid adoption of Unicode.
The first 256 code points were made identical to the content of ISO 8859-1 so as to make it trivial to convert existing western text. A lot of essentially identical characters were encoded multiple times at different code points to preserve distinctions used by legacy encodings and therefore allow conversion from those encodings to Unicode (and back) without losing any information. For example, the "fullwidth forms" section of code points encompasses a full Latin alphabet that is separate from the main Latin alphabet section. In Chinese, Japanese and Korean (CJK) fonts, these characters are rendered at the same width as CJK ideographs rather than at half the width. For other examples, see Duplicate characters in Unicode.
Also, while Unicode allows for combining characters, it also contains precomposed versions of most letter/diacritic combinations in normal use. These make conversion to and from legacy encodings simpler and allow applications to use Unicode as an internal text format without having to implement combining characters. For example, é can be represented in Unicode as U+0065 (Latin small letter e) followed by U+0301 (combining acute accent), but it can also be represented as the precomposed character U+00E9 (Latin small letter e with acute).
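The canonical equivalence between these two forms can be checked programmatically. A minimal Python sketch using the standard unicodedata module (variable names are illustrative):

    import unicodedata

    decomposed = "e\u0301"   # U+0065 LATIN SMALL LETTER E + U+0301 COMBINING ACUTE ACCENT
    precomposed = "\u00e9"   # U+00E9 LATIN SMALL LETTER E WITH ACUTE

    print(decomposed == precomposed)                                # False: different code point sequences
    print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True: canonically equivalent after normalization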
The Unicode standard also includes a number of related items, such as character properties, text normalisation forms and bidirectional display order (for the correct display of text containing both right-to-left scripts, such as Arabic or Hebrew, and left-to-right scripts).
Unicode covers almost all scripts (writing systems) in current use today.
Unicode has added further scripts and will cover even more, including historic scripts less commonly used as well as extinct scripts encoded for academic purposes.
Further additions of characters to the already-encoded scripts, as well as symbols, in particular for mathematics and music (in the form of notes and rhythmic symbols), also occur. The Unicode Roadmap lists scripts not yet in Unicode with tentative assignments to code blocks. Invented scripts, most of which do not qualify for inclusion in Unicode due to lack of real-world usage, are listed in the ConScript Unicode Registry, along with unofficial but widely-used Private Use Area code assignments. Similarly, many medieval letter variants and ligatures not in Unicode are encoded in the Medieval Unicode Font Initiative.
Mapping and encodings
- See also: Mapping of Unicode characters
The Unicode Consortium, based in California, develops the Unicode standard. Any company or individual willing to pay the membership dues may join this organization. Members include virtually all of the main computer software and hardware companies with any interest in text-processing standards, such as Apple Computer, Microsoft, IBM, Xerox, HP, Adobe Systems and many others.
The Consortium first published The Unicode Standard (ISBN 0-321-18578-1) in 1991, and continues to develop standards based on that original work. Unicode developed in conjunction with the International Organization for Standardization, and it shares its character repertoire with ISO/IEC 10646: the Universal Character Set. Unicode and ISO/IEC 10646 function equivalently as character encodings, but The Unicode Standard contains much more information for implementers, covering in depth topics such as bitwise encoding, collation and rendering. The Unicode Standard enumerates a multitude of character properties, including those needed for supporting bidirectional text. The two standards do use slightly different terminology.
When writing about a Unicode character, it is normal to write "U+" followed by a hexadecimal number indicating the character's code point. For code points in the BMP, four digits are used; for code points outside the BMP, five or six digits are used, as required. Older versions of the standard used similar notations, but with slightly different rules. For example, Unicode 3.0 used "U-" followed by eight digits, and allowed "U+" to be used only with exactly four digits in order to indicate a code unit, not a code point.
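As a rough illustration, the convention can be reproduced with a few lines of Python (the helper function below is hypothetical, not part of any standard library):

    # Hypothetical helper that formats a code point in the "U+" notation described above:
    # at least four hexadecimal digits, expanding to five or six for code points beyond the BMP.
    def u_notation(code_point: int) -> str:
        return "U+{:04X}".format(code_point)

    print(u_notation(0x0041))    # U+0041  (LATIN CAPITAL LETTER A, inside the BMP)
    print(u_notation(0x1D11E))   # U+1D11E (MUSICAL SYMBOL G CLEF, outside the BMP)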
Unicode revision history
Storage, transfer, and processing
So far, Unicode has appeared simply as a means to assign a unique number to each character used in the written languages of the world. The storage of these numbers in text processing comprises another topic; problems result from the fact that much software written in the Western world deals only with 8-bit or smaller character encodings, with Unicode support added only slowly in recent years. Similarly, in representing the scripts of Asia, the ASCII-based double-byte character encodings cannot even in principle encode more than 32,768 characters, and in practice the architectures chosen impose lower limits. Such limits do not suffice for the needs of scholars of the Chinese language alone.
The internal logic of much 8-bit legacy software typically permits only 8 bits for each character, making it impossible to use more than 256 code points without special processing. Sixteen-bit software can support only some tens of thousands of characters. Unicode, on the other hand, has already defined more than 100,000 encoded characters. Systems designers have therefore suggested several mechanisms for implementing Unicode; which one implementers choose depends on available storage space, source code compatibility, and interoperability with other systems.
Unicode defines two mapping methods:
- the UTF (Unicode Transformation Format) encodings
- the UCS (Universal Character Set) encodings
The encodings include:
- UTF-7: a relatively unpopular 7-bit encoding, often considered obsolete
- UTF-8: an 8-bit, variable-width encoding that maximizes compatibility with ASCII
- UTF-EBCDIC: an 8-bit, variable-width encoding that maximizes compatibility with EBCDIC
- UCS-2: a 16-bit, fixed-width encoding that supports only the BMP; considered obsolete
- UTF-16: a 16-bit, variable-width encoding
- UCS-4 and UTF-32: functionally identical 32-bit, fixed-width encodings
The numbers in the names of the encodings indicate the number of bits in one code value (for UTF encodings) or the number of bytes per code value (for UCS encodings). UTF-8 and UTF-16 are probably the most commonly used encodings.
UTF-8 uses one to four bytes per code point and, being compact for Latin scripts and ASCII-compatible, provides the de facto standard encoding for interchange of Unicode text. It is also used by most recent Linux distributions as a direct replacement for legacy encodings in general text handling.
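As a rough sketch of this variable width, the following Python snippet prints the UTF-8 byte sequences for characters of increasing code point (the sample characters are arbitrary):

    # UTF-8 spends 1 to 4 bytes per code point and is byte-identical to ASCII for U+0000..U+007F.
    for ch in ("A", "\u00e9", "\u3042", "\U00010348"):   # U+0041, U+00E9, U+3042, U+10348
        encoded = ch.encode("utf-8")
        print(ch, len(encoded), encoded.hex(" "))
    # Expected byte counts: 1 (ASCII), 2 (Latin-1 range), 3 (rest of the BMP), 4 (supplementary planes).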
UCS-2 is an obsolete, 16-bit, fixed-width encoding covering the Basic Multilingual Plane (BMP) only. For characters in the BMP, UCS-2 and UTF-16 are identical, so they can be considered different implementation levels of the same encoding. The UCS-2 and UTF-16 encodings specify the Unicode Byte Order Mark (BOM) for use at the beginning of text files, where it may be used for detecting byte order (endianness). Some software developers have adopted it for other encodings, including UTF-8, which does not need an indication of byte order; in that case it simply marks the file as containing Unicode text. The BOM, code point U+FEFF, has the important property of unambiguity under byte reordering, regardless of the Unicode encoding used: U+FFFE (the result of byte-swapping U+FEFF) is not a legal character, and U+FEFF in places other than the beginning of text conveys the zero-width no-break space (a character with no appearance and no effect other than preventing the formation of ligatures). Also, the byte values FE and FF never appear in valid UTF-8. The same character converted to UTF-8 becomes the byte sequence EF BB BF.
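These properties can be observed directly; a minimal Python sketch (the byte values shown in the comments follow from the encodings themselves):

    bom = "\ufeff"                           # ZERO WIDTH NO-BREAK SPACE, used as the BOM
    print(bom.encode("utf-16-be").hex())     # feff   (big-endian UTF-16)
    print(bom.encode("utf-16-le").hex())     # fffe   (little-endian UTF-16)
    print(bom.encode("utf-8").hex())         # efbbbf (the byte sequence EF BB BF)
    # The generic "utf-16" codec uses a leading BOM to pick the byte order when decoding.
    print(b"\xff\xfe\x41\x00".decode("utf-16"))   # A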
UTF-16 is similar to UCS-2 but can include one or two 16-bit words in order to cover the supplementary characters (introduced from Unicode 3.1 onwards). UTF-16 is used by many APIs, often for upward compatibility with APIs that were developed when Unicode was UCS-2 based, or for compatibility with other APIs that use UTF-16. UTF-16 is the standard format for the Windows API (though surrogate support is not enabled by default) and for the Java (J2SE 1.5 or higher) and .NET bytecode environments.
In UTF-32 and UCS-4, one 32-bit code value serves as a fairly direct representation of any character's code point (although the endianness, which varies across platforms, affects how the code value manifests as an octet sequence). In the other encodings, each code point may be represented by a variable number of code values. UTF-32 is widely used as an internal representation of text in programs (as opposed to stored or transmitted text), since every Unix operating system that uses the gcc compilers to generate software uses it as the standard "wide character" encoding. Recent versions of the Python programming language (beginning with 2.2) may also be configured to use UTF-32 as the representation for Unicode strings, effectively disseminating this encoding in high-level software.
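A short Python sketch contrasting the two representations (the sample character U+1D11E is chosen arbitrarily):

    # U+1D11E (MUSICAL SYMBOL G CLEF) lies outside the BMP, so UTF-16 needs a surrogate pair,
    # while UTF-32 always spends exactly four bytes per code point.
    clef = "\U0001D11E"
    print(clef.encode("utf-16-be").hex(" "))   # d8 34 dd 1e -> surrogate pair D834 DD1E
    print(clef.encode("utf-32-be").hex(" "))   # 00 01 d1 1e -> the code point, zero-padded to 32 bits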
Punycode, another encoding form, enables the encoding of Unicode strings into the limited character set supported by the ASCII-based Domain Name System. The encoding is used as part of IDNA, which is a system enabling the use of Internationalized Domain Names in all languages that are supported by Unicode.
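Python's standard codecs include an "idna" codec implementing this conversion; a small sketch (the domain name is purely illustrative):

    # An internationalized label is converted to an ASCII-compatible form carrying the "xn--" prefix.
    print("bücher.example".encode("idna"))            # b'xn--bcher-kva.example'
    print(b"xn--bcher-kva.example".decode("idna"))    # bücher.example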
GB18030 is another encoding form for Unicode, from the Standardization Administration of China. It is the official character set of the People's Republic of China (PRC).
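Python also ships a gb18030 codec, so the relationship to Unicode can be sketched as a simple lossless round trip (the sample text is arbitrary):

    # GB18030 can represent every Unicode code point, so encoding and decoding round-trips losslessly.
    text = "Unicode \u7edf\u4e00\u7801"          # "Unicode 统一码"
    encoded = text.encode("gb18030")
    print(encoded.decode("gb18030") == text)     # True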
Ready-made versus composite characters
Unicode includes a mechanism for modifying character shape and so greatly extending the supported glyph repertoire. This covers the use of combining diacritical marks, which are inserted after the main character; several combining diacritics can be stacked over the same character. However, for reasons of compatibility, Unicode also includes a large quantity of precomposed characters. So in many cases, users have several ways of encoding the same character. To deal with this, Unicode provides the mechanism of canonical equivalence.
An example of this arises with Hangul, the Korean alphabet. Unicode provides the mechanism for composing Hangul syllables with their individual subcomponents, known as Hangul Jamo. However, it also provides all 11,172 combinations of precomposed Hangul syllables.
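The equivalence between a precomposed syllable and its jamo sequence can be illustrated with Python's unicodedata module (a minimal sketch; the sample syllable is arbitrary):

    import unicodedata

    syllable = "\ud55c"                              # U+D55C HANGUL SYLLABLE HAN (한)
    jamo = unicodedata.normalize("NFD", syllable)    # decomposes into conjoining jamo
    print([hex(ord(c)) for c in jamo])               # ['0x1112', '0x1161', '0x11ab']
    print(unicodedata.normalize("NFC", jamo) == syllable)   # True: recomposes to the single syllable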
The CJK ideographs currently have codes only for their precomposed form. Still, most of those ideographs evidently comprise simpler elements (radicals), so in principle Unicode could decompose them just as happens with Hangul. This would greatly reduce the number of required code points, while allowing the display of virtually every conceivable ideograph (which might do away with some of the problems caused by the Han unification). A similar idea covers some input methods, such as Cangjie and Wubi. However, attempts to do this for character encoding have stumbled over the fact that ideographs do not actually decompose as simply or as regularly as it seems they should.
A set of radicals was provided in Unicode 3.0 (CJK radicals between U+2E80 and U+2EFF, KangXi radicals in U+2F00 to U+2FDF, and ideographic description characters from U+2FF0 to U+2FFB), but the Unicode standard (ch. 11.1 of Unicode 4.1) warns against using ideographic description sequences as an alternate representation for previously encoded characters:
- This process is different from a formal encoding of an ideograph. There is no canonical description of unencoded ideographs; there is no semantic assigned to described ideographs; there is no equivalence defined for described ideographs. Conceptually, ideograph descriptions are more akin to the English phrase "an e with an acute accent on it" than to the character sequence <U+0065, U+0301>.
Many languages, including Arabic and Hindi, have special orthographic rules which require that certain combinations of letterforms be combined into special ligature forms. The rules governing ligature formation can be quite complex, requiring special script-shaping technologies such as OpenType (by Adobe and Microsoft), Graphite (by SIL International), or AAT (by Apple). Instructions are also embedded in fonts to tell the operating system how to properly output different character sequences. In simpler cases, such as the placement of combining marks or diacritics, fixed-width fonts sometimes employ a method known as "sidebearing", in which the special marks precede the main letterform in the data stream and the font-rendering software knows to combine the marks into a final form. This method works only for some diacritics, and may fail to properly handle stacked marks.
As of 2004, most software still cannot reliably handle many features not supported by older font formats, so combining characters generally will not work correctly. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice their appearance can vary greatly across software applications. Similarly, underdots, as needed in the romanization of Indic languages, will often be placed incorrectly. As a workaround, Unicode characters that map to precomposed glyphs can be used for many such characters. The need for such alternatives stems from the limitations of fonts and rendering technology, not from weaknesses of Unicode itself.
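Whatever the rendering, the two sequences mentioned above are canonically equivalent, which can be verified in Python (a minimal sketch using the standard unicodedata module):

    import unicodedata

    precomposed = "\u1e17"        # U+1E17 LATIN SMALL LETTER E WITH MACRON AND ACUTE
    combining = "e\u0304\u0301"   # e + COMBINING MACRON + COMBINING ACUTE ACCENT
    print(precomposed == combining)                                  # False as raw code point sequences
    print(unicodedata.normalize("NFD", precomposed) == combining)    # True after decomposition
    print(unicodedata.normalize("NFC", combining) == precomposed)    # True after composition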
Unicode in use
Unicode has become the dominant scheme for internal processing, and sometimes for storage, of text (though a lot of text is still stored in legacy encodings). Early adopters tended to use UCS-2 and later moved to UTF-16, as this was the least disruptive way to add support for non-BMP characters. The best-known such system is Windows NT (and its descendants, Windows 2000 and Windows XP), which uses Unicode as the sole internal character encoding. The Java and .NET bytecode environments, Mac OS X, and the KDE desktop environment also use it for internal representation.
UTF-8 (originally developed for Plan 9) has become the main storage encoding on most Unix-like operating systems (though others are also used by some libraries) because it is a relatively easy replacement for traditional extended ASCII character sets.
MIME defines two different mechanisms for encoding non-ASCII characters in e-mail, depending on whether the characters are in e-mail headers such as the "Subject:" or in the text body of the message. In both cases, the original character set is identified as well as a transfer encoding. For e-mail transmission of Unicode the UTF-8 character set and the Base64 transfer encoding are recommended. The details of the two different mechanisms are specified in the MIME standards and are generally hidden from users of e-mail software.
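The header mechanism can be sketched with Python's standard email package (the subject text is illustrative; with the utf-8 charset the library chooses Base64 for headers):

    from email.header import Header, decode_header

    # Non-ASCII header text becomes a MIME "encoded word" naming the charset and transfer encoding.
    encoded = Header("Grüße aus Zürich", charset="utf-8").encode()
    print(encoded)                         # =?utf-8?b?R3LDvMOfZSBhdXMgWsO8cmljaA==?=
    raw, charset = decode_header(encoded)[0]
    print(raw.decode(charset))             # Grüße aus Zürich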
The adoption of Unicode in e-mail has been very slow. Most East-Asian text is still encoded in a local encoding such as Shift-JIS, and many commonly used e-mail programs still cannot handle Unicode data correctly, if they have any support at all. This situation is not expected to change in the foreseeable future.
Web browsers have supported several UTFs, especially UTF-8, for many years now. Display problems result primarily from font-related issues. In particular, Internet Explorer does not render many code points unless it is explicitly told to use a font that contains them.
Ever since HTML 4.0, all W3C recommendations have used Unicode as their document character set, with the encoding left variable. This replaced the 8-bit ASCII superset ISO 8859-1, which had previously been the standard character set and encoding.
Although syntax rules may affect the order in which characters are allowed to appear, both HTML 4 and XML (including XHTML) documents, by definition, comprise characters from most of the Unicode code points, with the exception of:
- most of the C0 and C1 control codes
- the permanently-unassigned code points D800 to DFFF
- any code point ending in FFFE or FFFF
These characters manifest either directly as bytes according to the document's encoding, if the encoding supports them, or users may write them as numeric character references based on the character's Unicode code point.
For example, the references &#916;, &#1049;, &#1511;, &#1605;, &#3671;, &#12354;, &#21494;, &#33865; and &#45307; (or the same numeric values expressed in hexadecimal, with &#x as the prefix) display on browsers as Δ, Й, ק, م, ๗, あ, 叶, 葉 and 냻. If the proper fonts exist, these symbols look like the Greek capital letter "Delta", Cyrillic capital letter "Short I", Hebrew letter "Qof", Arabic letter "Meem", Thai numeral 7, Japanese Hiragana "A", simplified Chinese "Leaf", traditional Chinese "Leaf", and Korean Hangul syllable "Nyaelh", respectively.
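Producing and resolving such references is straightforward; a minimal Python sketch:

    import html

    cp = ord("\u0394")                # U+0394 GREEK CAPITAL LETTER DELTA
    print("&#{};".format(cp))         # &#916;  (decimal numeric character reference)
    print("&#x{:X};".format(cp))      # &#x394; (hexadecimal form)
    print(html.unescape("&#916; and &#x394;"))   # Δ and Δ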
In HTTP requests, URLs must be percent-encoded, usually using the UTF-8 encoding to represent Unicode.
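Python's urllib.parse follows this convention, encoding non-ASCII characters as UTF-8 and escaping each byte; a small sketch (the path is illustrative):

    from urllib.parse import quote, unquote

    print(quote("/wiki/Åland"))          # /wiki/%C3%85land  (Å -> UTF-8 bytes C3 85 -> %C3%85)
    print(unquote("/wiki/%C3%85land"))   # /wiki/Åland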
Free and retail fonts based on Unicode are commonly available, since TrueType and OpenType support Unicode. These font formats map Unicode code points to glyphs.
Thousands of fonts exist on the market, but fewer than a dozen fonts, sometimes described as "pan-Unicode" fonts, attempt to support the majority of Unicode's character repertoire. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols. Several reasons justify this approach: applications and documents rarely need to render characters from more than one or two writing systems; fonts covering many scripts demand considerable memory and storage resources; and operating systems and applications show increasing intelligence in regard to obtaining glyph information from separate font files as needed, i.e. font substitution. Furthermore, designing a consistent set of rendering instructions for tens of thousands of glyphs constitutes a monumental task; such a venture passes the point of diminishing returns for most typefaces.
Several subsets of Unicode are standardized: Microsoft Windows since Windows NT 4.0 supports WGL-4 with 652 characters, which is considered to support all contemporary European languages using the Latin, Greek or Cyrillic script. Other standardized subsets of Unicode include MES-1 (335 characters) and MES-2 (1062 characters) (CWA 13873:2000, Multilingual European Subsets in ISO/IEC 10646-1).
Rendering software which cannot process a Unicode character appropriately most often displays it as an open rectangle, or as the Unicode "replacement character" (U+FFFD, �), to indicate the position of the unrecognized character. Some systems have made attempts to provide more information about such characters. The Apple LastResort font will display a substitute glyph indicating the Unicode range of the character, and the SIL Unicode fallback font will display a box showing the hexadecimal scalar value of the character.
Multilingual text-rendering engines
- Uniscribe (Windows)
- Apple Type Services for Unicode Imaging (new engine for Macintosh)
- WorldScript (old engine for Macintosh)
- Pango (open source, used by GTK+ and hence GNOME)
- ICU Layout Engine (open source)
- Graphite (open source renderer from SIL International)
- Scribe (open source renderer from Trolltech)
Because keyboard layouts cannot have simple key combinations for all characters, several operating systems provide alternative input methods that allow access to the entire repertoire.
In Microsoft Windows (since Windows 2000), the "Character Map" program (Start/Programs/Accessories/System Tools/Character Map) provides rich-text editing controls for all Table I characters up to U+FFFF, by selection from a drop-down table, assuming that a Unicode font is selected. Word processing programs such as Microsoft Word have a similar control embedded (Insert/Symbol). Rather more painfully, where the code point of the desired character is known, it is possible to create Unicode characters by pressing Alt + #, where # represents 0 followed by the decimal code point; for example, Alt + 0241 will produce the Unicode character ñ. (The # must start with 0 to be considered a Unicode code point, and the keys on the numeric pad of the keyboard must be used.) This also works in many other Windows applications, but not in applications that use the standard Windows edit control and do not make any special provisions to allow this type of input. See Alt codes. To add Unicode characters to chart titles in Microsoft Excel, first type the title text into a worksheet cell, where the (Insert/Symbol) control can be used. The resulting text can be cut and pasted into chart titles.
Apple Macintosh users have a similar feature with an input method called 'Unicode Hex Input', in Mac OS X and in Mac OS 8.5 and later: hold down the Option key, and type the four-hex-digit Unicode code point. Inputting code points above U+FFFF is done by entering surrogate pairs; the software will convert each pair into a single character automatically. Mac OS X (version 10.2 and newer) also has a 'Character Palette', which allows users to visually select any Unicode character from a table organized numerically, by Unicode block, or by a selected font's available characters. The 'Unicode Hex Input' method must be activated in the International System Preferences in Mac OS X or the 'Keyboard' Control Panel in Mac OS 8.5 and later. Once activated, 'Unicode Hex Input' must also be selected in the Keyboard menu (designated by the flag icon) before a Unicode code point can be entered.
GNOME provides a 'Character Map' utility (Applications/Accessories/Character Map) which displays characters ordered by Unicode block or by writing system, and allows searching by character name or extended description. Where the character's code point is known, it can be entered in accordance with ISO 14755: hold down Ctrl and Shift and enter the hexadecimal Unicode value, preceded by the letter U if using GNOME 2.15 or later. The input code is a UTF-32 value; for example, typing Ctrl+Shift+100050 enters a character in Unicode private use plane 16.
At the X Input Method or GTK+ Input Module level, the input method editor SCIM provides a raw code input method to allow the user to enter the 4-digit hexadecimal Unicode value.
All X Window applications (including GNOME and KDE, but not only them) support using the Compose Key. For keyboards which do not have a designated Compose key, another key (e.g., CapsLock) could be redefined as a Compose key.
The Linux console allows Unicode characters to be entered by holding down Alt and typing the decimal code on the numeric keypad. (In order for this to work, the console should be placed in Unicode mode with unicode_start(1) and a suitable font selected with setfont(8).) The AltGr key allows the hexadecimal code to be entered instead, using NumLock-Enter as A-F (clockwise). ISO 14755-compliant input (Ctrl+Shift+hexadecimal code on normal keys) is also available in the
The Opera web browser, in version 7.5 and later, allows users to enter any Unicode character directly into a text field by typing its hexadecimal code, selecting it, and pressing Alt + x.
To input a Unicode character in a text box in Mozilla Firefox on Linux, type the hexadecimal character code while holding down the control and shift keys.
In the Vim text editor, Unicode characters can be entered by pressing CTRL-V and then entering a key combination. For example, an em dash can typically be entered by typing CTRL-V, then "u2014". For more information, type ":help i_CTRL-V_digit" in Vim. (Note that the entered text will be Unicode only if the current encoding is set to UTF-8 or another Unicode encoding; type ":help encoding" in Vim for details.) Many Unicode characters can also be entered using digraphs; a table of such characters and their corresponding digraphs can be obtained using the ":digraphs" command (again, provided the current encoding is set to Unicode).
WordPad and Word 2002/2003 for Windows additionally allow entering Unicode characters by typing the hexadecimal code point, for example 014B for ŋ, and then pressing Alt + x to substitute the string to the left with its Unicode character. Usefully, the reverse also applies: if a user positions the cursor to the right of a non-ASCII character and presses Alt + x, the Microsoft software will substitute the character with its hexadecimal Unicode code point.
Several visual keyboards are available that make entering Unicode characters and symbols very easy.
- Quick Key (open source)
- Lightweight Unicode Map/Picker (in-browser character map; operating-system independent; open source)
- PopChar (demo version)
- Unicode Character Pickers (web-based; particularly useful for working with scripts you don't know)
Some people, mostly in Japan, oppose Unicode in general, claiming technical limitations and political problems in its operation. People working on the Unicode standard regard such claims simply as misunderstandings of the Unicode standard and of the process by which it has evolved. The most common mistake, according to this view, involves confusion between abstract characters and their highly variable visual forms (glyphs). However, contrary to its policy, the Standard has also included characters which differ merely stylistically, such as ligatures. The rationale behind this is the policy that one-to-one mappings must be provided between characters in existing legacy character sets and characters in Unicode, to facilitate conversion to Unicode.
Some have decried Unicode as a plot against Asian cultures perpetrated by Westerners with no understanding of the characters as used in Chinese, Korean, and Japanese, despite the presence of a majority of experts from all three regions in the Ideographic Rapporteur Group (IRG). The IRG advises the consortium and ISO on additions to the repertoire and on Han unification, the identification of forms in the three languages which one can treat as stylistic variations of the same historical character. Han unification has become one of the most controversial aspects of Unicode.
Unicode is criticized for failing to allow for older and alternate forms of kanji, which, critics argue, complicates the processing of ancient Japanese and of uncommon Japanese names, although it follows the recommendations of Japanese-language scholars and of the Japanese government. There have been several attempts to create alternatives to Unicode, among them TRON (which, although not widely adopted in Japan, is favored by some, particularly those who need to handle historical Japanese text) and UTF-2000.
It is true that many older forms were not included in early versions of the Unicode standard, but Unicode 4.0 contains more than 70,000 Han characters, far more than any dictionary or any other standard, and work continues on adding characters from the early literature of China, Korea, and Japan. Some argue, however, that this is not satisfactory, pointing out as an example the need to create new characters, representing words in various Chinese dialects, more of which may be invented in the future.
An alternative approach, pursued by people such as Chu Bong-Foo, uses an encoding that provides information on the radicals making up Han characters. For example, a Chinese computing system released by Chu in 1991 already supported 60,000 Han characters and required only 80 KB of memory to generate glyphs from raw Cangjie codes.
Their argument against Unicode is that the Unicode approach to Han characters is the same as assigning a separate code to every English word.
Thai language support has been criticized for its illogical ordering of Thai characters. This complication is due to Unicode inheriting the Thai Industrial Standard 620, which worked in the same way. This ordering problem complicates the Unicode collation process.
Indic scripts such as Tamil and Telugu are each allocated only 128 slots of the Unicode space, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical-order characters into visual order and forming compound characters out of components. Local scholars argue in favor of assigning Unicode code points to compound characters. This will most likely not happen, as can be seen in the case of the Tibetan script, where even the Chinese national standards body failed to achieve a similar change.
Opponents of Unicode sometimes erroneously claim even now that it cannot handle more than 65,535 characters, even though this limitation was removed in Unicode 2.0.
- In 1997 Michael Everson made a proposal to encode the characters of the artificial Klingon language in Plane 1 of ISO/IEC 10646-2. The Unicode Consortium rejected this proposal in 2001 as "inappropriate for encoding" not because of any technical inadequacy, but because users of Klingon normally read, write and exchange data in Latin transliteration. Now that some enthusiasts are blogging in tlhIngan pIqaD (Klingon alphabet) using newly available fonts and keyboard layouts, the possibility of reapplying to ISO has been raised.
- In 1993, proposals were made to include the elvish scripts Tengwar and Cirth from J. R. R. Tolkien's fictional Middle-earth setting in Plane 1. The Consortium withdrew the draft to incorporate changes suggested by Tolkienists, and as of 2005 it remains under consideration.
- Both Klingon and the Tolkien scripts have assignments in the ConScript Unicode Registry.
- In 2005, the 100,000th character to be entered into the pipeline for standardisation was the MALAYALAM PRASLESHAM. It was encoded based on the contribution by Rachana Akshara Vedi.
- The April Fools' Day RFC of 2005 specified two "parody" UTF encodings, UTF-9 and UTF-18.
- Unicode reference (wikibooks)
- Comparison of Unicode encodings
- Free software Unicode typefaces
- Mapping of Unicode characters
- Universal Character Set
- Alt codes
- The Complete Manual of Typography, James Felici, Adobe Press; 1st edition, 2002
- Unicode Demystified: A Practical Programmer's Guide to the Encoding Standard, Richard Gillam, Addison-Wesley Professional; 1st edition, 2002
- Unicode Explained, Jukka K. Korpela, O'Reilly; 1st edition, 2006
Books about Unicode
- The Unicode Standard, Version 5.0, Fifth Edition, The Unicode Consortium, Addison-Wesley Professional, Oct. 27, 2006. ISBN 0-321-48091-0
- The Unicode Standard, Version 4.0, The Unicode Consortium, Addison-Wesley Professional, Aug. 27, 2003. ISBN 0-321-18578-1
- The Unicode Consortium
- Unicode versions: 3.1, 3.2, 4.0, 4.0.1, 4.1, 5.0.0
- New characters and scripts, and characters and scripts under investigation
- Code Charts (PDF files)
- Character information
- decodeunicode: a Unicode wiki with 50,000 GIFs in three sizes; English/German.
- Unicode Character Search (search for characters by their Unicode names)
- The Letter Database: uses forms to present groups in list or grid format by hexadecimal.
- Unicode Code Converter v3
- Insert characters instantly with Quick Key Character Grid.
- A suite of programs for finding out what is in a Unicode file
- Programs for converting between Unicode and various ASCII representations
- Table of Unicode characters from 1 to 65535
- Example text files using Unicode
- Unicode special character map resembles the Windows version. Click a symbol to obtain either the named or numeric code for HTML.
- ConScript Unicode Registry a project to standardize part of the Private Use Area for use with artificial scripts and artificial languages. An explanation of how to propose character names in Unicode is available here.
- What is Unicode?
- Tim Bray's Characters vs Bytes explains how the different encodings work.
- Alan Wood's Unicode Resources Contains lists of word processors with Unicode capability; fonts and characters are grouped by type; characters are presented in lists, not grids.
- Seeing the entirety of Unicode printed out as a single large poster gives a good feel for the size of the code.
- The secret life of Unicode "A peek at Unicode's soft underbelly" Describes problems requiring resolution. Includes links to Unicode resources.
- A harshly critical article about Unicode, and a response to it (n.b.: This article is dated 2001, and much has changed regarding Unicode since that time)
- Software development
- International Components for Unicode (ICU) An open source set of libraries that provide robust and full-featured Unicode services for your applications on a wide variety of platforms.
- utf8proc - A free library for Unicode normalization, case-folding and stripping/mapping of certain characters.
- The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) by Joel Spolsky of JoelonSoftware.com (this is from 2003 and now outdated, but still a reasonable starting point).
- Supplementary Characters in the Java Platform from Sun Microsystems
- JSR 204 Unicode 3.1 supplementary character support Java Specification Request
- Unicode support information
- Freedesktop.Org's Project UTF-8's purpose is to document and promote proper Unicode support in free and Open Source software.