User:MarkAHershberger/Weekly reports/2010-W11

=Search=

I spent most of the week cleaning up a few search-normalization problems. In particular, I discovered an off-by-one error in the MySQL-based search: the fullwidth, lowercase “ｚ” was not normalized but was, instead, encoded.
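To illustrate the kind of off-by-one described above (this is a hypothetical Python sketch, not MediaWiki's actual PHP code): fullwidth lowercase letters occupy U+FF41 (“ａ”) through U+FF5A (“ｚ”), and an exclusive upper bound that stops at U+FF5A silently skips exactly the last letter.

```python
def normalize_buggy(text: str) -> str:
    # Bug: range() excludes its upper bound, so U+FF5A ("z") is never mapped.
    table = {cp: cp - 0xFF41 + ord('a') for cp in range(0xFF41, 0xFF5A)}
    return text.translate(table)

def normalize_fixed(text: str) -> str:
    # Fix: extend the bound to 0xFF5B so U+FF5A is included.
    table = {cp: cp - 0xFF41 + ord('a') for cp in range(0xFF41, 0xFF5B)}
    return text.translate(table)

print(normalize_buggy("ｘｙｚ"))  # xyｚ  (fullwidth z left untouched)
print(normalize_fixed("ｘｙｚ"))  # xyz
```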

Another problem that I failed to find a satisfactory solution to was the one I mentioned in last week's report: the encoding of Unicode characters. Since the problem only affects MySQL FULLTEXT search, which the WMF does not use, I won't spend any more time on it except to summarize it here (and on the tech mailing list to solicit possible input).

==MySQL FULLTEXT Search and Unicode==
For a summary of the problem, see “Escaping MySQL utf8 encoding” from last week. A solution would be to find an escape sequence that we can pass to MySQL: something that MySQL's FULLTEXT search sees as a word character, but that would not normally be produced by MediaWiki. The current approach, prefixing “u8” to a hexadecimal representation of the character's UTF-8 bytes, can easily be defeated by typing u8 sequences directly into the text of a page.
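As a sketch of the current scheme (in Python for brevity; the real code is PHP), the escape is just “u8” plus the hex of the character's UTF-8 bytes, which is exactly why a user typing the same token collides with it:

```python
def u8_escape(ch: str) -> str:
    # Escape a character as "u8" followed by the hex of its UTF-8 bytes.
    return "u8" + ch.encode("utf-8").hex().upper()

print(u8_escape("Ⴀ"))  # u8E182A0
# The weakness: a user who literally types "u8E182A0" into a page
# produces an identical token, so the escape is ambiguous.
```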

I eventually found the following hack: since MediaWiki treats the text as UTF-8 encoded, but MySQL's FULLTEXT engine and tables assume the text is latin1 (ISO/IEC 8859-1), use a latin1 “word” character as the escape sequence. Say À (&amp;#192;, or UTF-8 0xC3 0x80) is chosen as the escape character. Then the Georgian letter “Ⴀ” (UTF-8 0xE1 0x82 0xA0) would be encoded as “ÀE182A0À” instead of “u8E182A0”.
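The hack above can be sketched like so (again in Python rather than PHP; the choice of À as delimiter follows the example above):

```python
ESC = "À"  # U+00C0; a "word" character under MySQL's latin1 collation

def latin1_escape(ch: str) -> str:
    # Wrap the hex of the character's UTF-8 bytes in the escape character.
    # A user cannot type the latin1 byte 0xC0 through MediaWiki's UTF-8
    # input path, so the token cannot be forged from page text.
    return ESC + ch.encode("utf-8").hex().upper() + ESC

print(latin1_escape("Ⴀ"))  # ÀE182A0À
```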

This would work, except that we put the MySQL interface into UTF-8 mode from the start, so a latin1 À character can't really be sent to the database as-is.

==Lucene Search Normalization==
I also discovered a problem with Lucene's search normalization while testing some of TimStarling's changes: fullwidth numbers were not being normalized. After getting some input from rainman-sr, I was able to update the code and fix the problem. Rainman-sr also gave me access on SourceForge to upload a new release. I'm not aware of any other changes that are needed.

==Normalizing fullwidth characters across all languages==
Currently, fullwidth characters are normalized only for Japanese and Chinese. It makes sense to normalize these characters in all languages if that is possible. TimStarling said he would be OK with this as long as it was fast enough.

I found that the currently used normalization (preg_replace) shows what looks like O(n²) behavior. A str_replace-based solution that I found seems much closer to O(n) behavior.
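The shape of the difference can be sketched in Python (the actual code is PHP, and the fullwidth range U+FF01 through U+FF5E is assumed here): replacing one mapped character at a time copies the whole string once per mapping entry, while a single table-driven pass touches each input character once.

```python
# Map fullwidth ASCII (U+FF01..U+FF5E) down to plain ASCII (U+0021..U+007E).
FULLWIDTH = {chr(cp): chr(cp - 0xFF01 + 0x21) for cp in range(0xFF01, 0xFF5F)}

def normalize_slow(text: str) -> str:
    # One full pass (and one full copy) of the text per mapping entry:
    # the cost grows with len(text) times the size of the mapping.
    for fw, ascii_ch in FULLWIDTH.items():
        text = text.replace(fw, ascii_ch)
    return text

TABLE = str.maketrans(FULLWIDTH)

def normalize_fast(text: str) -> str:
    # Single pass with a lookup table: close to O(n) in the text length.
    return text.translate(TABLE)

print(normalize_fast("ＡＢＣ１２３"))  # ABC123
```

Both functions produce identical output; only the cost model differs.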

=Upcoming work=

This week, I'll commit and document the speed difference in the fullwidth-normalization code. After that, I hope to finish creating a Firefogg extension to replace the chunked-uploading support that was rejected for 1.16.