Talk:Operational transformation

Untitled

 * Please do not cite submitted work that is not verifiable.
 * Please discuss before heavily changing the article.
 * If you want to cite new algorithms, please update the table of OT algorithms with a verifiable bibliographic reference.

Momo54 (talk) 21:12, 2 June 2009 (UTC)

Removed Criticisms of OT
The following sections have been removed from the main article. The so-called criticisms are either opinions or unfounded speculations. Without credible references to these opinions or consensus amongst experts, they are not appropriate in the spirit of Wikipedia (refer to Verifiability and Neutral_point_of_view). Further discussions, however, are welcome.

nusnus (talk)

''==Criticisms of OT==


 * Transformation functions are difficult to write and prove.

This criticism can be mitigated as the OT framework develops. Designers can simply use the original, simple transformation functions proposed by Ellis and Gibbs in 1989. In their approach, transformation functions do not need to satisfy the above-explained TP2 condition at all, which is why they can be as simple as they should be.
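For illustration only, here is a minimal Python sketch of character-wise inclusion transformation in this spirit. These are simplified functions written for this page, not Ellis and Gibbs's exact definitions:

```python
# Simplified character-wise inclusion transformation (illustrative only).
# Each function shifts the position of one operation to account for a
# concurrent operation that has already been applied.

def transform_ins_ins(p1, p2):
    """Insert position p1 transformed against a concurrent insert at p2."""
    return p1 + 1 if p1 >= p2 else p1

def transform_ins_del(p1, p2):
    """Insert position p1 transformed against a concurrent delete at p2."""
    return p1 - 1 if p1 > p2 else p1
```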


 * I don't understand the point. Imine et al., Ressel et al., and Sun et al. claim that Ellis/Gibbs transformations are not correct. —Preceding unsigned comment added by Gritzko (talk • contribs) 17:51, 30 June 2010 (UTC)
 * Only some of the authors alluded to above claim that "OT cannot be made correct", leading to their development of so-called "post-OT" methods. The other authors argued that the OT algorithms of their time did not meet the correctness criteria for a class of applications of interest, and thus created new schemes that did. The correctness of OT is application oriented. That point has been discussed in the article. Nusnus


 * What is the formal definition for OT "intentions"?

In the alternative correctness criteria proposed in the CA model, "intention" is no longer a concern. What is important here is really that an OT algorithm can be proved against well-formalized correctness criteria, whether it is called "admissibility" or "intention preservation". Some researchers believe that the notion of admissibility is a reasonable definition of intention preservation. However, this is to be agreed upon by the research community.


 * There are many other ways to merge data and OT is too complex.

OT is not complex in terms of efficiency; it merely looks complicated to people who are unfamiliar with the topic. It generally takes much time and effort to learn an unfamiliar subject. Being complex and looking complicated are two different things in computer science.


 * Is the theoretical framework completely sound?

Yes! The theoretical frameworks proposed by Li et al. include both well-formalized correctness criteria and sound approaches to designing OT algorithms and proving their correctness. Their recent work has also made significant optimizations to improve the efficiency of OT algorithms so that they work well on resource-constrained devices and platforms such as cell phones and browsers. ''

I have returned the "criticisms" section. There definitely are some criticisms, they definitely point at some problems, and there is no reason to deny their existence. Gritzko (talk) 09:18, 25 June 2010 (UTC)

I moved the critiques back here because the material (both the previous and the current versions) contains imprecise and erroneous statements that just muddle the picture. Nusnus (talk) 09:18, 25 June 2010 (UTC)

By the way, I do not understand the following passage: "In their approach, there is no need to satisfy transformation properties such as TP2 because it does not require that the (inclusive) transformation functions work in all possible cases." Does it mean that the framework is 'correct', but only in 'some cases'? Either the idea is poorly expressed or it is outright ridiculous. A theorem cannot be "correct in most of the cases". Gritzko (talk) 09:18, 25 June 2010 (UTC)

Some Recommendations
In the spirit of neutrality and verifiability of Wikipedia, we have adhered to the following general guidelines, which we recommend to all contributors:
 * avoid using comparative rhetoric, i.e., statements suggesting that one particular approach/algorithm/method is better/simpler/more efficient/more correct than another. These statements only serve to incite nonconstructive controversy (refer to What_Wikipedia_is_not) and project a negative and confusing image of the subject matter to the general public.
 * in a similar vein, avoid making statements describing your view on what's easy, what's hard, what's good and what's not -- you may hold these professional opinions, but Wikipedia is not the place for them (refer to Neutral_point_of_view).
 * Describe what you did in simple clear terms, let the reader make their own opinions and views.
 * Other than the pioneering work of Ellis, avoid directly mentioning particular algorithms/systems and the names of their inventors -- state what the work did and leave the other information to citations and references.

Note: these guidelines are open for opinions and debate.

nusnus (talk)

''===How to Design OT Algorithms=== In general, there are two approaches to developing OT algorithms. In the more widely understood approach, designers reuse a generic control procedure and focus on the design of application-specific transformation functions. Most works along this line require that the (inclusion) transformation functions satisfy two properties, TP1 and TP2, which essentially say that the functions work in all possible cases (or along arbitrary transformation paths). This has turned out to be extremely difficult to achieve over the past 10+ years of design practice in the research community. In their most recent work, Sun et al. have chosen to take a different design approach.
 * In the spirit of the above, the following has been flagged as debatable subject matter.

...

'''Li et al's work considers that satisfying the TP1/TP2 conditions is a hard path to success. The two transformation properties, TP1 and TP2, were important in the early days of OT because they were debatably considered sufficient conditions for convergence. Therefore proving TP2 at most shows that an OT algorithm achieves convergence.''' It is well known, due to Sun et al., that convergence is not enough in collaborative applications such as group editors. The converged data must be further constrained by conditions such as intention preservation and admissibility. In the new frameworks, they do not discuss TP1/TP2 at all because they no longer require that the transformation functions work in all possible cases (i.e., along arbitrary transformation paths). This requirement would make the design of transformation functions (and hence OT algorithms) unnecessarily complicated. ''
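For readers unfamiliar with TP1, here is a minimal, self-contained Python sketch, written for this page and not taken from any cited paper, that checks TP1 on a tiny insert-only model (position ties are broken by site id so both sides agree):

```python
# TP1 says: applying op1 and then the transformed op2 must yield the same
# state as applying op2 and then the transformed op1.

def ins(doc, pos, ch):
    return doc[:pos] + ch + doc[pos:]

def T(p, site_p, q, site_q):
    """Transform insert position p against a concurrent insert at q;
    ties on position are broken by site id."""
    return p + 1 if (q < p or (q == p and site_q < site_p)) else p

doc = "abc"
p1, s1, c1 = 1, 0, "X"   # op1: site 0 inserts "X" at position 1
p2, s2, c2 = 2, 1, "Y"   # op2: site 1 inserts "Y" at position 2
left  = ins(ins(doc, p1, c1), T(p2, s2, p1, s1), c2)
right = ins(ins(doc, p2, c2), T(p1, s1, p2, s2), c1)
assert left == right == "aXbYc"  # TP1 holds for this pair
```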

the meaning of the term "Operational transformation"?
I'm not a native English speaker, and I can't work out the meaning of the term "operational transformation".
 * Is it about the basic idea of transforming a document by a stream of many small single operations,
 * or is it about the secondary idea that the operations transforming a text must themselves be transformed in order to produce a correct result? --Nashev (talk) 11:00, 27 June 2009 (UTC)

Microsoft Groove
From what I'm able to tell, Microsoft Groove uses a somewhat unique method of operational transformation. For each operation executed on a particular dataset, Groove stores information needed to revert that operation. When a change arrives that logically occurred before the most recent change, instead of transforming that change, it reverts all changes up to where that change should occur, applies that change, and then re-performs all of the changes again. This seems like it's based on the principle of operational transformation, but by reverting stuff instead of by transforming stuff. Is this still considered OT? I ask because if it is I want to add Groove to the list of products using OT.
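To make the described scheme concrete, here is a hypothetical Python sketch of the revert-and-replay integration as described above. The names and the operation encoding are invented for illustration; this is not Groove's actual implementation:

```python
# On receiving a change that logically occurred earlier, undo the later
# local changes, apply the remote change in place, then redo the undone
# changes. Operations are (kind, position, character) triples; delete
# operations store the deleted character so they can be reverted.

def apply_op(doc, op):
    kind, pos, ch = op
    if kind == "ins":
        return doc[:pos] + ch + doc[pos:]
    return doc[:pos] + doc[pos + 1:]   # "del"

def invert(op):
    kind, pos, ch = op
    return ("del", pos, ch) if kind == "ins" else ("ins", pos, ch)

def integrate(doc, log, remote_op, remote_index):
    """Insert remote_op at its logical position remote_index in the log."""
    later = log[remote_index:]
    for op in reversed(later):         # revert changes that logically follow
        doc = apply_op(doc, invert(op))
    doc = apply_op(doc, remote_op)     # apply the remote change in place
    for op in later:                   # re-perform the reverted changes
        doc = apply_op(doc, op)
    return doc, log[:remote_index] + [remote_op] + later
```

Note that, as described in the comment above, the re-performed changes are applied with their original offsets; whether that is sufficient in general depends on how Groove actually encodes its operations.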

If not, is there a name for what Groove uses? —Preceding unsigned comment added by Javawizard (talk • contribs) 18:35, 10 April 2010 (UTC)

Document Dimensionality
In the section on OT Software there are 3 groups based on (in part) document dimensionality, but no other discussion in the text to clarify the meaning in this context. My guess is that text documents are considered 1-dimensional because they are strings of characters. But the 2d and 3d examples (Word/Powerpoint and Maya models respectively) could similarly be encoded or serialized in a 1d representation. Are the transformations in their cases actually operating on multidimensional objects or is this statement of dimensionality entirely superfluous to the topic of OT? —Preceding unsigned comment added by 137.244.215.63 (talk) 15:14, 15 April 2010 (UTC)

OTFAQ: Operational Transformation Frequently Asked Questions and Answers
You may find answers to most questions/issues discussed here from this OTFAQ. - Dchen (talk) 00:39, 26 June 2010 (UTC)

Correctness Discussion
I've started this section in discussion for people interested in correctness issues. Gritzko, I'm not against valid critiques. Dongame, please add your thoughts in the discussions page, first. —Preceding unsigned comment added by Nusnus (talk • contribs) 17:42, 6 November 2010 (UTC)
 * First of all, you removed the entire section, like NO criticisms of OT ever existed. This is already the second time the section gets deleted (see above). As criticisms of OT are present in the real world, so this section should be present here. You are free to polish the content, but you cannot deny existence of criticisms, esp. the published research on the topic. Gritzko (talk) 12:54, 9 November 2010 (UTC)
 * Again, you talk of "real-world" problems/criticisms, yet do not provide concrete examples or references to relevant articles. Are these your own views or are they backed by the literature? The two references you cited contain self-invalidating and contradictory results and claims (see my points below). Stating criticisms without valid grounds is hardly in the spirit of neutrality and verifiability as advocated by Wikipedia. Nusnus (talk) 11:10, 13 November 2010 (UTC)

Critique of OT
While the classic OT approach of defining operations through their offsets in the text seems simple and natural, real-world distributed systems raise serious issues: operations propagate at finite speed, the states of participants often differ, and thus the resulting combinations of states and operations are extremely hard to foresee and understand.
 * This section makes no sense.

Nusnus (talk) 09:18, 30 Oct 2010 (UTC)
 * 1) What's the meaning of "Classical" OT (like, as opposed to "New Age")? It's certainly a term not coined by the research community and framing it as such is misleading.
 * 2) "real-world distributed system": give examples where these serious issues present themselves in "real-world systems".
 * 3) Yes, there is delay in operation propagation, and document states do diverge before those operations arrive and are integrated, but that is precisely the problem OT is designed to solve.
 * Well, this is my attempt to explain the difficulties of OT in layman's terms. You may do better. The problem of OT is that the model always pulls the carpet from under its own feet: it defines operations through offsets, but the operations change those offsets. A cycle. Gritzko (talk) 12:54, 9 November 2010 (UTC)
 * Firstly, the statements regarding what OT is and is not are your own points of view without the backing of any reputable reference whatsoever, and thus are valid grounds for deletion. Secondly, characterizing OT as "operations with offsets" is so grossly inaccurate that it's no better than saying that graph algorithms are about nodes and edges. This makes one wonder if you bothered to read the main article at all and tried to understand the relevant material. When presenting oneself as an "expert", one should be very careful and precise about the statements being made -- giving a "layman's" description of a topic is no excuse for misrepresentation. Nusnus (talk) 11:10, 13 November 2010 (UTC)
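As a neutral illustration of the offset issue being debated here, a tiny Python example (constructed for this page, not taken from any reference) shows why naive application of concurrent offset-based inserts diverges, and how transforming one offset restores convergence:

```python
# Two sites concurrently insert into "abc": site A inserts "X" at 1,
# site B inserts "Y" at 2.

def ins(doc, pos, ch):
    return doc[:pos] + ch + doc[pos:]

# Naive application of the remote operation, unchanged, at each site:
site_a = ins(ins("abc", 1, "X"), 2, "Y")   # B's op applied as-is
site_b = ins(ins("abc", 2, "Y"), 1, "X")   # A's op applied as-is
assert site_a != site_b                    # the copies diverge

# Site A transforms B's offset past A's earlier insert (2 -> 3):
site_a_ot = ins(ins("abc", 1, "X"), 3, "Y")
assert site_a_ot == site_b                 # now both sites converge
```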


 * What an excellent string of personal attacks! Gritzko (talk) 10:06, 15 November 2010 (UTC)


 * Just stating and explaining the reasons for a. the inaccuracy of your content. b. rebuffing your excuse for writing it as such. Nusnus (talk) 22:04, 15 November 2010 (UTC)

The attempt to apply automatic theorem provers to various OT models has shown that, as a rule, they are incorrect and cause divergence of copies in certain cases.


 * Using a theorem prover by no means implies that the results obtained are correct. Too often these so-called "formally proven" OT functions are actually incorrect, and disproven solutions are actually correct. Why? To guarantee that the theorem verifiers are correct, you'd need another meta-verifier to prove those verifiers correct in specification, criteria, and implementation. The majority of OT solutions are correct because they meet the correctness requirements of their target applications, and major system milestones were achieved without the aid of theorem verifiers. Nusnus (talk) 09:18, 30 Oct 2010 (UTC)


 * Well, maybe Molli et al. are totally incorrect with their theorem prover, but you should cite some source that proves this is the case. Just brushing it away because you dislike theorem provers is a bit too much. AFAIR, they provide very concrete examples that crush very particular OT schemes. Gritzko (talk) 12:54, 9 November 2010 (UTC)


 * OK. Read "An Approach to Ensuring Consistency in Peer-to-Peer Real-Time Group Editors", Computer Supported Cooperative Work (CSCW), Volume 17, Numbers 5-6, 553-611. A simple counterexample is given there for the transformation functions in your citation -- the same supposedly theorem-proved and verified transformation functions which the researchers used to triumphantly proclaim that all existing work had been "incorrect". Your second reference, by the same group of researchers, in fact contradicts their own earlier work (in Section 5, the authors openly invalidate the first reference). So, again, I recommend checking the literature and your own references before accusing anyone of bias. Nusnus (talk) 11:10, 13 November 2010 (UTC)


 * Thank you very much for your reading advice! Now try to justify why the following citation should not be mentioned as criticism, otherwise I will add it (to the recovered section). Citation: It turns out very difficult to design and prove transformation functions that verify TP2, as has been confirmed repeatedly in the literature (Ressel et al. 1996; Suleiman et al. 1997; Sun et al. 1998; Imine et al. 2003; Li and Li 2008a; Oster et al. 2005a). Due to the need to consider complicated case coverage, formal proofs are very complicated and error-prone, even for OT algorithms that only treat two characterwise primitives (insert and delete) (Li and Li 2008a). The work by Molli and colleagues (Molli et al. 2003; Oster et al. 2005a; Imine et al. 2006; Oster et al. 2006b) resorts to theorem provers and tries to automatically prove TP1 and TP2. According to (Ressel et al. 1996), TP1 and TP2 are sufficient conditions for convergence. That is, even if TP1 and TP2 are proved, we can only conclude that an algorithm achieves convergence but cannot draw any conclusion about intention preservation. (An Admissibility-Based Operational Transformation Framework for Collaborative Editing Systems, Du Li and Rui Li)   Gritzko (talk) 10:06, 15 November 2010 (UTC)


 * Li's work cited above simply states the following two facts:
 * TP1 and TP2 are neither necessary nor sufficient for OT correctness (the last two statements of the citation).
 * To preserve TP2 at the function level is a hard and potentially error prone approach (the first three statements of the citation).


 * In fact, many of the OT solutions support CP2/TP2 at the transformation control algorithm level (which is easy to prove), so they DO NOT require CP2/TP2 at the transformation function level for convergence (see the section on OT control algorithms). Thus, CP2/TP2 is a non-issue for a large number of OT solutions, and they are ALL correct with respect to this property. If you don't understand why this is the case, read the section on transformation properties and the related literature. Here is a reading list to get you started: [4][28][11][3][23][24][17].
 * --The End--


 * P.S. It is not my responsibility to educate you on the subject matter -- it's yours. So, please, do so before you request a "justification/explanation" for snippets taken from other people's work. Also, the literature set is a golden chest in which to find answers to your puzzles. Nusnus (talk) 22:04, 15 November 2010 (UTC)


 * The question was: why don't you consider it a criticism? I asked because you removed the "Criticisms" section (for the second time already, right?) as if no criticisms ever existed. Meanwhile, OT is extensively criticized, especially in the OT literature itself. I conclude that you are a bit inadequate. Also, please stop posing as if you were my prof. You are (1) not mine and (2) not a prof, AFAIU. Gritzko (talk) 14:07, 16 November 2010 (UTC)


 * Stay on the subject matter and avoid hand-waving rhetoric. The onus is on you to explain why you decided to read these two facts as "criticisms":
 * 1. TP1 and TP2 are neither necessary nor sufficient for OT correctness (the last two statements of the citation). 
 * 2. To preserve TP2 at the function level is a hard and potentially error prone approach (the first three statements of the citation).
 * The first point is a plain characterization of the role that the two properties TP1 and TP2 play in OT.
 * The second point says that a particular approach to designing OT systems can be hard. A lot of existing OT systems avoided TP2 by adopting simpler approaches (read [4][28][11][3][23][24][17]).
 * So....... Where is the criticism?


 * The two citations in your original post contained self-contradictions, especially on the core subject of OT correctness. You didn't object. The critique post was argued around those two citations, so the arguments were flawed and removed. What's there to argue?


 * You say: Meanwhile, OT is extensively criticized, esp. in the OT literature itself. Check your citations for consistency, cite, and explain details. If not, this conversation can stop. Nusnus (talk) 19:10, 16 November 2010 (UTC)


 * CITATION (last 2 sentences): "TP1 and TP2 are sufficient conditions for convergence. That is, even if TP1 and TP2 are proved, we can only conclude that an algorithm achieves convergence but cannot draw any conclusion about intention preservation." YOU: "TP1 and TP2 are neither necessary nor sufficient for OT correctness (the last two statements of the citation)". I think, you interpret stuff rather broadly and wishfully. I return the section back. Gritzko (talk) 14:33, 17 November 2010 (UTC)


 * It is universally known that convergence does not guarantee correctness ([1][5]), and the last two sentences of the citation reiterate this fact.
 * Original citation: According to (Ressel et al. 1996), TP1 and TP2 are sufficient conditions for convergence..... You left out the reference.
 * What does it mean? In that particular algorithm (Ressel et al. 1996)[2], TP2 was a necessary condition for convergence, but TP2 is not generally required for OT and was not needed in other algorithms. This is a fact, not an interpretation (see [4][28][11][3][23][24][17]).
 * Nusnus (talk) 16:12, 17 November 2010 (UTC)

The correctness problems of OT led to the introduction of transformation-less "post-OT" schemes, such as WOOT, Logoot and Causal Trees (CT). Post-OT schemes work around the need to transform operations by employing a combination of unique symbol identifiers, vector timestamps and/or tombstones.


 * These citations are not relevant to OT. You may start a different page/s about these topics as you see fit. Nusnus (talk) 09:18, 30 Oct 2010 (UTC)


 * Quite relevant. Originally, those were OT improvement efforts. Those models decompose a document into atomic editing operations, but they do not use offsets to identify application points. There is no reliable family name for them yet; e.g., Molli et al. call theirs "without OT" (WOOT), operations without transformations. Gritzko (talk) 12:54, 9 November 2010 (UTC)
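To illustrate the identifier-based idea behind these schemes, a minimal Python sketch (invented for this page; none of WOOT, Logoot, or CT works exactly this way): each character carries a unique, totally ordered identifier, so concurrent inserts commute and need no transformation.

```python
import bisect

# Each character is keyed by a unique, totally ordered identifier (here:
# a tuple of (position, site id) pairs), so the document is just a sorted
# list and insert operations commute regardless of arrival order.

def insert(doc, ident, ch):
    bisect.insort(doc, (ident, ch))

a = []
insert(a, ((1, 0),), "H")
insert(a, ((2, 0),), "i")
insert(a, ((1, 0), (1, 1)), "e")   # a concurrent insert "between" H and i

b = []                              # same operations, different arrival order
insert(b, ((1, 0), (1, 1)), "e")
insert(b, ((1, 0),), "H")
insert(b, ((2, 0),), "i")

assert a == b                       # order of arrival does not matter
assert "".join(ch for _, ch in a) == "Hei"
```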