Talk:One-instruction set computer

URISC
In my opinion, "The Ultimate RISC" is a better entry point for the notions discussed in the two articles on URISC and OISC. For one thing, the URISC concept predates that of OISC and it has been published in a peer-reviewed journal with an archival record of the design and its justifications. As far as I can tell, OISC is documented only in web pages. For another, the term OISC (one instruction set computer) is grammatically incorrect. Its correct interpretation is "a computer that has one instruction set" (i.e., any computer), not a computer that has a single instruction. In reply to comments on usefulness of the concept, the URISC article clearly states that URISC is a pedagogical tool, rather than the design for a viable computer. It is helpful to think about issues in computer hardware design (at an introductory level) without the clutter arising from the definition of many instructions. Bparhami 04:29, 1 December 2007 (UTC)


 * "The URISC concept" is the same thing as "that of OISC", so it can't predate it. The URISC, as documented in the paper referred to by the URISC page, is a computer with a "subtract and branch if negative" instruction; that's one form of computer with an instruction set with one instruction. You can, if you choose, debate what the Wikipedia page for the concept of a computer with only one instruction should be called, but it's pretty clear that the concept deserves only one Wikipedia page, not two. I suspect most papers, Web sites, etc. call it "OISC"; if so, that's the right name for it.


 * I don't remember when I first encountered the concept of a single-instruction computer, so I don't know whether it existed before the U of Waterloo paper.


 * Googling for "one instruction set computer" turned up many links, including a Google Books link for the book Computer Architecture: A Minimalist Perspective by William F. Gilreath and Phillip A. Laplante, discussing both "subtract and branch if negative" and "move", so, clearly, it's not documented only in Web pages.


 * One could argue that the term is grammatically incorrect, but people use it, probably because its abbreviation matches the regular expression "[A-Z]ISC". Guy Harris 07:36, 1 December 2007 (UTC)


 * According to W. F. Gilreath and P. A. Laplante (Computer Architecture: A Minimalist Perspective, 2003, p. 51), an OISC was first described in 1956/59: "A one instruction computer was first described by van der Poel in his thesis, 'The Logical Principles of Some Computers' [1956] and in an article 'Zebra, a simple binary computer' [1959]. van der Poel's computer called ZEBRA ... was a subtract and branch if negative machine." Concerning grammatical correctness, I suppose OISC is an acronym for "one-instruction set computer" (which is perfectly fine), but somehow the hyphen gets omitted. Still, it seems to me the preferred entry point. --r.e.s. (talk) 05:48, 23 December 2007 (UTC)

Is this useful?
I.e., is there a point in building computers with this technique? —Preceding unsigned comment added by 213.65.121.87 (talk • contribs) 13:54, 9 March 2005 (UTC)


 * See Esoteric_programming_languages
 * "Usability is rarely a high priority for such languages. The usual aim is to remove or replace conventional language features while still maintaining a language that is Turing-complete. Thus, by adhering to some principles while deliberately making no sense as a whole, these languages are perhaps the programming equivalent of nonsense verse."  —Preceding unsigned comment added by 82.35.85.74 (talk • contribs) 16:18, 17 May 2005 (UTC)


 * Well, it can theoretically do quite a lot, and in fact it can probably do anything at all. It's just harder to do everything :-) --Ihope127 16:10, 26 August 2005 (UTC)


 * Interesting question.
 * What about logarithms, I mean before a few thousand of them were tabulated?
 * And Babbage's machines, which I am sure Ada Lovelace would have started debugging with the passion of a pioneer?
 * And FORTRAN: how many times did it integrate bravely in a manageable handful of punch cards before someone eventually thought of wasting 0.032 seconds of its Wednesday compile-time budget to print somewhere in the twenty-page cabalistic listing?


 * Yes, this is actually a very enlightening subject when trying to think about how computers actually work. Furthermore, there may be an argument about designing the simplest possible computer hardware (Nanoscale computers anyone??) --216.204.206.146 21:03, 2 February 2007 (UTC)

--

About the suggested merges, I strongly advocate in favor
 * OISC, SBN and URISC are minor variants. In fact, three of eight possible (and equivalent) options. In my opinion, the least impractical is an (I think) undocumented reverse-subtract-and-branch-if-negative-or-zero.
 * The distinction is however spurious because, if OISC comes to practical life, humans will probably never code more than the one needed interface instruction.
 * Moreover, nothing guarantees that the type of variant actually used, or produced, will be quantically observable.
 * AlainD 23:29, 1 March 2006 (UTC)

I am strongly in favor of URISC being made a subsection of OISC. OISC should refer to all possible machines with one instruction, and all should be in one place. —Preceding unsigned comment added by 128.61.33.211 (talk • contribs) 20:53, 27 April 2006 (UTC)

In favor of all merges. However, OISC should be the encompassing article. It refers to the framework within which all these other phenomena exist. --66.112.246.75 04:33, 28 June 2006 (UTC)

--

I'm confused; does this page provide three examples of OISCs (and therefore three instructions that could be used for such a thing)? If so, it should be reworded to say that "*A* One Instruction Set Computer is a single machine language opcode...". It should then say "There are three known such instructions that can be used to implement an OISC: Subtract and branch if negative (SUBLEQ), Reverse-subtract and skip if borrow (RSSB), and Move." -- 216.204.206.146 20:58, 2 February 2007 (UTC)
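For readers wondering what the first of those instructions actually does, here is a minimal interpreter sketch in Python for the subleq ("subtract and branch if less than or equal to zero") variant; the three-cells-per-instruction layout and the negative-address halt convention are common choices for illustration, not mandated by any single source:

```python
def subleq_run(mem, ip=0, max_steps=10000):
    """Minimal subleq interpreter. Each instruction is three cells (a, b, c):
    mem[b] -= mem[a]; if the result is <= 0, jump to c, else fall through.
    A negative branch target halts the machine (a common convention)."""
    steps = 0
    while ip >= 0 and steps < max_steps:
        a, b, c = mem[ip], mem[ip + 1], mem[ip + 2]
        mem[b] -= mem[a]
        ip = c if mem[b] <= 0 else ip + 3
        steps += 1
    return mem

# Example: a one-instruction program that clears the cell at address 6
# (mem[6] -= mem[6] gives 0, which is <= 0, so it branches to -1 and halts)
mem = subleq_run([6, 6, -1, 0, 0, 0, 42])
# mem[6] is now 0
```

Despite its size, an interpreter like this is enough to run any subleq program, which is what makes the single-instruction idea surprising in the first place.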

Subtract and Branch if Negative isn't possible as suggested in the article. The article suggests you can simply use the interpreter, but that isn't actually a complete computer: you need the ability to access registers and store numerical values in the program itself. There is simply no way around it. 129.210.145.73 03:39, 8 April 2007 (UTC)

I believe OISC should be merged into the (IMO more scientific and more general term) URISC. Quickly looking at citeseer, URISC is first mentioned in 1997 and another two times later, OISC is mentioned once in a 2003 technical report. Also anyone who's not a trained computer scientist please stop speculating on whether a URISC computer is practical/useful. This is besides the point. No one's asking you to translate your C programs to brainfuck and throw away your x86 and buy a URISC. It's a theoretical concept. Please comment on how/whether/why the articles should be merged... Alex.g (talk) 08:53, 14 December 2007 (UTC)

this is a joke
Somebody slapped a fresh new name on the turing machine. Probably this article should be a redirect, or at least stripped down to a simple explanation of the joke. AlbertCahalan 04:50, 27 May 2007 (UTC)


 * Well, it's become a common name, and what makes you think it's a joke? ajdlinux 06:03, 27 May 2007 (UTC)


 * See comment (5) at this webpage for my opinion about this; OISC is a lot more than a mere change of name. --r.e.s. (talk) 05:48, 23 December 2007 (UTC)

hm?..
A copy instruction can be implemented similarly:

 STO a, b ==  subleq b, b
              subleq a, Z
              subleq Z, b
              subleq Z, Z

Isn't the canonical load from right to left? As in you're moving the value at memory location B into memory location A. But the first instruction destroys B doesn't it?

Also, why:

 subneg a, b, c   ; Mem[b] = Mem[b] - Mem[a]
                  ; if (Mem[b] < 0) goto c

instead of:

 subneg a, b, c   ; Mem[b] = Mem[b] - Mem[a]
                  ; if (Mem[b] < 0) goto Mem[c]

The latter is far more flexible, and necessary if you split the operands and data into 4 different arrays rather than all in the same. .froth. (talk) 18:37, 18 June 2008 (UTC)
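The four-subleq copy macro under discussion can be checked mechanically. A small Python sketch (the scratch cell Z is assumed to start at zero, and control flow is ignored since each macro step falls through):

```python
def subleq_step(mem, a, b):
    # the data effect of one subleq, ignoring the branch
    # (each step of this macro falls through regardless)
    mem[b] -= mem[a]

def mov(mem, src, dst, z):
    """Copy mem[src] to mem[dst] via the four-subleq macro:
       subleq dst,dst ; subleq src,z ; subleq z,dst ; subleq z,z
    Note it is the DESTINATION that gets destroyed first, so with the
    operand order 'source, destination' the source value survives."""
    subleq_step(mem, dst, dst)   # mem[dst] = 0
    subleq_step(mem, src, z)     # mem[z]   = -mem[src]
    subleq_step(mem, z, dst)     # mem[dst] = 0 - (-mem[src]) = mem[src]
    subleq_step(mem, z, z)       # mem[z]   = 0 again (scratch restored)

mem = [7, 0, 0]   # src=0 holds 7; dst=1; z=2 is the zeroed scratch cell
mov(mem, 0, 1, 2)
# mem is now [7, 7, 0]
```

So the macro is consistent once the operands are read as "source, destination": the first instruction destroys b, the destination, while a, the source, is preserved.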


 * I've edited the article to call the simulated opcode "MOV" instead of "STO", and I've reworded to make it clearer that the operand order is "source, destination". Similarly for the ADD. Of course, if anyone feels strongly about it, the alternative ordering "destination, source" could be used, and the example changed accordingly. I think either way is fine (as both seem to be widely used), as long as it's clearly described.


 * About the direct vs. indirect goto ... The article reflects the OISC models that are described in published literature, and, as far as I know, none of them use the indirect goto.


 * --r.e.s. (talk) 23:51, 18 June 2008 (UTC)

Isn't memory mapping (TTA) a cheat?
I.e., if you have a memory-mapped adder and you do...

X -> ALU.OpA, Y -> ALU.AddToA, ALU.Result -> Z

You're actually using a coprocessor to do the math, like the old Weitek FPU for 386's.

I would expect there to be at least be a little controversy, if not outright name calling.

I notice that the RSSB machine also has two special zero-operand instructions ... RSSBPC and RSSBACC. And the MAXQ machine actually does the transliteration in its assembler.

The first machine looks okay though and it feels just like a "good old NAND gate".

86.0.255.130 (talk) 07:51, 13 August 2009 (UTC)
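To make the objection concrete, here is a toy Python sketch of a transport-triggered "move" machine with a memory-mapped adder; the register names, mapped addresses, and trigger-on-write convention are invented for illustration, not taken from any real TTA design:

```python
class ToyTTA:
    """Toy transport-triggered machine: the only operation is 'move',
    but a write to the trigger address makes a memory-mapped adder
    (effectively a coprocessor) do the actual arithmetic."""
    OPA, ADD_TRIG, RESULT = 100, 101, 102   # made-up mapped addresses

    def __init__(self):
        self.mem = {}

    def move(self, src, dst):
        val = self.mem.get(src, 0)
        if dst == self.ADD_TRIG:
            # the 'adder' fires on write: Result = OpA + moved value
            self.mem[self.RESULT] = self.mem.get(self.OPA, 0) + val
        else:
            self.mem[dst] = val

m = ToyTTA()
m.mem[0], m.mem[1] = 2, 3            # X = 2, Y = 3
m.move(0, ToyTTA.OPA)                # X -> ALU.OpA
m.move(1, ToyTTA.ADD_TRIG)           # Y -> ALU.AddToA (triggers the add)
m.move(ToyTTA.RESULT, 2)             # ALU.Result -> Z
# m.mem[2] is now 5
```

The sketch shows exactly where the "cheat" accusation comes from: the move instruction does no arithmetic itself; all the computation lives in the side effect attached to the mapped address.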


 * I tend to agree that TTA is a cheat. Look at this. It sure seems to me that the bits defining Rd in the instruction are acting like an opcode. UncleDouggie (talk) 09:32, 5 September 2009 (UTC)


 * All OISC languages can be grouped into two distinct types: ones that need memory mapping (such as RSSB) and others that do not (such as Subleq). From a theoretical perspective memory mapping is cheating, but in a hardware implementation it is important, because memory mapping can reduce the complexity of the CPU and usually reduces the size of the OISC instruction. Subleq can easily be converted into a two-operand instruction language with a memory-mapped instruction pointer (IP). --Mazonka (talk) 13:36, 7 September 2009 (UTC)


 * While it makes the CPU simpler it certainly makes the supporting circuitry more complex! By, for instance, requiring said circuitry to fill in for the missing 90% of the CPU. It's a cheat, essentially just hiding pieces of the effective CPU around the place. Why you'd ever build one, I don't know. The overall system has to be more complex, expensive, annoying and slower than just using a normal CPU.


 * The "subtract, branch on negative" computer is different; it's Turing complete and can run programs just by itself. A bugger to program, like. But a real computer, which the memory-mapped thing is not. 94.197.127.161 (talk) 02:59, 9 March 2013 (UTC)


 * There's at least one area in which memory mapped instructions can make things more practical: when there are a lot of vector or matrix operations. By having several memory mapped multiply circuits, things like scalar products get better throughput because the outer loop that is setting up the intermediate products and then collecting them to add them can leave things cooking while it gets on with the next ones. That way, there's the benefit from a certain amount of parallel architecture. There may well be other possibilities, too. PMLawrence (talk) 13:36, 10 April 2013 (UTC)

BitBitJump
It seems to me that BitBitJump is original research that's out of place here. Also, if the definition of an OISC requires Turing completeness, then BitBitJump doesn't qualify. --r.e.s. (talk) 17:19, 4 September 2009 (UTC)
 * I had just reorganized what was in the article before. After looking at it further, I think you're right that BitBitJump may be original research because the paper defining it is only published on the author's personal website and the esolang wiki probably doesn't qualify as a reliable third-party reviewer. Although, the wiki is referenced from a WP article. We should review our sources for the other instructions as well, and update them as needed, to make sure we don't have any other original research. The other references all need major cleanup. I suggest we leave in BitBitJump temporarily until we complete the review. We may come across another reference for it and we should apply the same source standard to all the presented instructions. As for the Turing completeness of BitBitJump, there seems to be some debate. See the summary of the issue, an in-depth discussion, and this from the author's paper that maybe agrees that it isn't there yet: "Keymaker (esolangs.org user) argued that the language presented here could be considered Turing-complete only if addressing is relative, not absolute. It seems that it is possible to redefine this language to use relative addressing without changing the principle." UncleDouggie (talk) 23:00, 4 September 2009 (UTC)
 * I found an academic publication of BitBitJump and updated the article. UncleDouggie (talk) 12:23, 5 September 2009 (UTC)
 * On the OR issue, it seems doubtful that an "academic publication" of that kind (which any registered member can submit) qualifies. The situation seems quite different for the subleq instruction and some of the others, which are taught in textbooks -- I'll try to find and post at least one such textbook reference. On the issue of Turing-completeness, the author's "Revision 2" of that paper, at his personal website, admits to the unproven nature of the Turing-completeness claim even if the addressing scheme were to be changed from absolute to relative addressing. (On the discussion page at the esolang site, the author admits that the absolute addressing version -- which is the one used in the definition of the language -- is *not* Turing complete.) --r.e.s. (talk)  —Preceding undated comment added 13:17, 5 September 2009 (UTC).
 * WP:SOURCES permits academic publications. The library website says that "The contents of arXiv conform to Cornell University academic standards." I presume that means there has been some type of review by a professor. However, the article intro. says that all OSICs are Turing Complete. If BitBitJump is not Turing Complete, does it belong in the article at all? I didn't make the recent edit that reintroduced the Turing complete claim for BitBitJump. I did leave a message for that anonymous user asking them to comment in this discussion. UncleDouggie (talk) 13:30, 5 September 2009 (UTC)
 * arXiv.org is *not* peer-reviewed, but rather "a collection of moderators for each area review the submissions and may recategorize any that are deemed off-topic. [...] While the arXiv does contain some dubious e-prints [...] arXiv generally re-classifies these". A topical review by moderators is not the same as peer-review, of course; more importantly, time must be allowed for a "dubious"  paper to be either corrected or identified as such.  The content of an e-print may require extensive corrections before it's either accepted for peer-reviewed publication or recognised as "dubious".  (Witness the revisions already seen in the bbj author's paper.)  On the subject of references, I eliminated the "Sources" section per the cleanup tag, converting them either to References or moving them to the External Links section.  In my opinion, if the bbj references added recently are (self-promotional?) Original Research links, then they should at best be removed to External Links; but, for now, I'm leaving them in References.--r.e.s. (talk) 13:04, 7 September 2009 (UTC)


 * The paper on BitBitJump is currently under review for publication in the journal Complex Systems.
 * The article should not just say that BitBitJump is not TC, because, as for many other assembly languages, its processor implementation is not TC but the assembly notation of the language is.
 * The article is incorrect in how it defines the OISC language. OISC is a type of RISC, and RISC is a type of computer CPU, and most (if not all) computer CPUs are not Turing-complete. Some languages, defined as a set of processor commands, may (or may not) be TC. The Subleq language, for example (not to be confused with the Subleq OISC), is TC when defined irrespective of its processor, with abstract integer operands. Reference to the completeness of processor commands (the universal-computer quality) is usually a requirement that the processor be capable of executing Turing-complete languages: a program written in a TC language can be compiled into the processor language and run on that processor. --Mazonka (talk) 13:27, 7 September 2009 (UTC)


 * The issue of whether BitBitJump is TC in the strict sense of the term has nothing to do with "its processor implementation", nor with any "assembly notation". It is about the BBJ language itself.  If there exists no BBJ program capable of simulating a Universal Turing Machine (and there is none), then BBJ is simply not TC.  Similarly, if there is no BBJ program capable of simulating an arbitrary LBA with arbitrary input (and there is none), then BBJ is not in that class either.  The overwhelmingly persuasive fact relative to these questions is that every BBJ program can access at most a fixed finite amount of storage (depending on the assumed wordsize of the abstract machine), which sets it quite apart from the Turing-complete  OISCs in the article. --r.e.s. (talk) 13:38, 15 September 2009 (UTC)
 * It's an interesting debate for sure. However, it belongs on esolang, not here. Until we have a reliable source showing that it is TC, we shouldn't have it in the article. UncleDouggie (talk) 14:47, 15 September 2009 (UTC)
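For what it's worth, the finite-storage point made above is easy to see in a toy interpreter. This Python sketch assumes a fixed word size and absolute addressing, and uses a self-jump as a halt convention (an assumption for the demo, not part of the language definition):

```python
def bitbitjump(bits, ip, word=8, max_steps=1000):
    """Toy BitBitJump: each instruction is three 'word'-bit absolute
    addresses a, b, c packed into a fixed-size bit array. Semantics:
    copy bit a to bit b, then jump to c. With absolute addressing and a
    fixed word size, no program can ever touch more than 2**word bits --
    the bounded-storage argument made in the thread above."""
    def read_addr(p):
        return int(''.join(str(bits[p + i]) for i in range(word)), 2)
    for _ in range(max_steps):
        a = read_addr(ip)
        b = read_addr(ip + word)
        c = read_addr(ip + 2 * word)
        bits[b] = bits[a]
        if c == ip:          # jump-to-self used as a halt convention here
            break
        ip = c
    return bits

bits = [0] * 256                       # the entire addressable universe
def enc(v): return [int(x) for x in format(v, '08b')]
# one instruction at address 0: copy bit 200 to bit 201, then jump to 0 (halt)
bits[0:8], bits[8:16], bits[16:24] = enc(200), enc(201), enc(0)
bits[200] = 1
bitbitjump(bits, 0)
# bits[201] is now 1
```

Note the array size is fixed before the program runs; nothing in the instruction can reach beyond it, which is what distinguishes this machine from the unbounded-memory OISC abstractions.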

OISC Computational Power
I've restored the page to a previous edit containing the following sentence, which is essentially from Computer Architecture: A Minimalist Perspective, specifically its discussion of "the universe of OISC instructions", pp. 49-50:


 * "The computational power of an OISC depends upon the single instruction it uses and upon the level of abstraction assumed (e.g., finite or infinite resources), ranging from essentially no power at all (e.g., a NOP instruction, representing nullity), up to that of a universal computer (e.g., a subleq or similar instruction, assuming infinite resources)."

That differs substantially from the alternative version, which was as follows:


 * "Since OISC is a subclass of RISC, which is in turn a subclass of microprocessors, it is required to be of Linear bounded automaton computational class. Some of OISC languages are easily extended in abstraction to be Turing-complete."

--r.e.s. (talk) 00:24, 14 September 2009 (UTC).


 * The first statement above is not quite right for the following reasons:
 * 1. A "NOP" instruction cannot produce a computer, but an OISC is a computer, whether real or imaginary.
 * 2. Originally OISC was called the Ultimate Reduced Instruction Set Computer. It means that OISC is a specific type (subset) of RISC. For RISC devices the process of reducing the number of instructions does not invalidate their computational power. Otherwise one would be able to call any processor with a random half of its instructions disabled a RISC, which is nonsense.
 * 3. If the proposed definition is allowed, then virtually any device will fall under the OISC definition. For example, my calculator with all the buttons covered with sticky tape except '1' would be an OISC, because it has one instruction, printing '1' on the display when button '1' is pressed - the instruction '1'. With such a definition the article does not have to describe particular types of OISC, because there is nothing special about them; it would just have to claim that an OISC is any device with one instruction.
 * --Mazonka (talk) 09:27, 14 September 2009 (UTC)


 * The textbook I cite above definitely includes NOP in what it calls "the continuum of OISC instructions", pointing out that indeed not all OISC instruction sets are Turing-complete. It then focusses on instruction sets that are complete, stating that "an instruction set constitutes the language that describes a computer's functionality." Whatever might be the original meaning of the term OISC (it does not appear in the URISC paper, but does appear in  the 2003 references cited in the article), it has evidently come to mean something much more general than your narrow interpretation of it. --r.e.s. (talk) 15:04, 14 September 2009 (UTC)


 * So? Is it supposed to be a counterargument? --Mazonka (talk) 15:31, 14 September 2009 (UTC)


 * Yes. It is supposed to explain how your interpretation differs from those in respected published sources. I didn't think it needed to be spelled out in more detail, but according to those sources ... #1 NOP does indeed produce an (essentially zero-power) OISC. #2 The term OISC did not originate in the URISC paper. #3 Because the universe of OISCs includes such a range of instruction sets, including NOP, it should not be surprising that many simple machines can be regarded as OISCs.--r.e.s. (talk) 17:48, 14 September 2009 (UTC)


 * I just reverted back to the original definition. I expect you will want to change it back again. If you do, please do not forget to include NOP as a new OISC, and especially this newly discovered OISC - a calculator with a button '1'. The original URISC papers describe SBN and MOVE, which later became known as OISC. My first point was that NOP is not a computer. Do you call it a computer? I can see you call it an OISC, but I am not sure whether you call an OISC a computer. --Mazonka (talk) 22:39, 14 September 2009 (UTC)


 * No, you reverted it back to a specious version that you yourself recently inserted, with the unwarranted claim that computationally an OISC must be in at least the LBA class. Contrary to what you seem to think, a computer need not have some minimum level of computational power in order to be called by that name.  You clearly dislike this point of view, according to which there is a range of computational power among computers, the range being from essentially no computational power up to that of Turing-powerful machines.  But whether you like it or not, this is a textbook point of view regarding OISCs -- and it's the role of a Wikipedia article to reflect this fact.  So, yes, I will revert your recent change. --r.e.s. (talk) 02:52, 15 September 2009 (UTC)
 * The statement from Computer Architecture: A Minimalist Perspective is correct, at least in the absence of a specific definition for an OISC. However, it's really an empty statement. One could just as easily say: "The computational power of a computer depends upon the instruction set it uses and upon the level of abstraction assumed (e.g., finite or infinite resources), ranging from essentially no power at all (e.g., an instruction set with no memory references), up to that of a universal computer." Just because a statement is from a published source and is literally correct doesn't mean that it is useful in the context of this article, which should be focusing on the differences between an OISC and a MISC that readers are more familiar with.
 * On the TC issue, we need to be clear on whether we're discussing the language or an implementation. The thing that blows people's socks off, and is the reason I went hunting for this article in the first place, is the notion that a one instruction language can be TC. So let's focus on that. The fact that a given implementation isn't TC isn't something unique to an OISC! So why even deal with it? Several weeks ago this article said that to be an OISC, the computer had to be TC. That's probably a poor way to put it as we see in the current debate. However, I think that languages that aren't TC aren't very interesting examples, just as the article on RISC probably doesn't discuss non-TC instruction sets. The title of the article probably doesn't help. One Instruction Set Language would be more technically correct. However, the precedent has been set by CISC, RISC and MISC. No one talks about RISL, probably because it's not easy to pronounce.
 * Is the BitBitJump language TC? Not can it be made that way, is it TC today? It seems to me that we don't have a reliable source that says it is. In fact, we don't have a reliable source to reference for it at all. Therefore, I favor removing BitBitJump from the article for now. Once there is a reliable source showing that it is TC, we should absolutely put it back in. That's pretty easy to do since we will always have the full article history. UncleDouggie (talk) 07:51, 15 September 2009 (UTC)
 * BitBitJump is of Linear bounded automaton (LBA) computational class. Any real computer including RISC is of this class. This does not need to be published to be proven true, because it is quite obvious. You can ask any computer scientist. On the other hand, I do not remember seeing RSSB published. Will we remove it as well until it is published?
 * Now I am a bit lost. Do we require Turing completeness in the definition of OISC, or not? If no, then anything is an OISC. If yes, then OISC is not RISC, because RISC is of LBA class and non-TC. --Mazonka (talk) 12:08, 15 September 2009 (UTC)
 * My argument above was that we should only include OISC languages in this article that are TC. Anything else opens us up to including the NOP instruction, which is pretty pointless. I added the ref for RSSB, it's on page 48 of Computer Architecture: A Minimalist Perspective. I didn't dig enough yet to see if the book also states that the language is TC. UncleDouggie (talk) 12:57, 15 September 2009 (UTC)


 * (To Mazonka) You are confusing abstract machines with real ones. The article is about OISCs as abstract machines, and their associated languages. (As an abstract machine, an OISC is not to be confused with an implementation of it.) The BBJ OISC, as an abstract machine with its associated language, is not a universal machine, nor is it even in LBA-class.  This has been explained to you in detail by myself and others at the esolang site, and it sets BBJ quite apart from all the other OISCs discussed in the article (which are universal machines).--r.e.s. (talk) 17:23, 16 September 2009 (UTC)


 * (To UncleDouggie) I agree with practically every single point you've made (with the minor exception of the term "empty"). Just as the focus of every cited reference is one or more OISCs and OISC languages that are established as being Turing-complete in the technically strict sense, that should also be the focus of this article.  Perhaps it could simply incorporate wording to the effect that although an OISC or an OISC language need not be TC, ordinary usage refers to those that are. --r.e.s. (talk) 13:09, 15 September 2009 (UTC)
 * I don't think we need to get carried away making the point about non-TC languages. The article on RISC doesn't mention TC at all. We wouldn't need to here either, except for the fact that when people hear OISC, many of them suddenly doubt that it could be TC. Therefore, we need to make the point that it can be TC, and in fact we present here several languages that are TC. UncleDouggie (talk) 13:27, 15 September 2009 (UTC)
 * "Empty" was a poor choice of words. The statement is not unique to an OISC and leads us into the dreaded NOP discussion. We shouldn't go there, just as we shouldn't if the statement was made about RISC or CISC, which it easily could be. UncleDouggie (talk) 13:36, 15 September 2009 (UTC)
 * Since a major reference describes a universe of OISCs, some of which are not TC, I don't think a Wikipedia article should imply that all OISCs are TC. To me this is just a matter of honestly reflecting the published literature.  I would have no problem with making the same point that the textbook makes, namely that although many OISCs are not TC, the astonishing fact is that some are, and these are the ones worth talking about. --r.e.s. (talk) 14:00, 15 September 2009 (UTC)
 * I agree that we shouldn't imply that all OISCs are automatically TC, just that this article focuses on OISCs that are TC. Some RISC languages aren't TC, but no one talks about them. UncleDouggie (talk) 14:29, 15 September 2009 (UTC)



Correction: not just some RISC - strictly speaking, none of them are TC. They are of LBA computational power. This class was specifically designed to be an appropriate description of computers. So if OISC is defined as TC, then it is not a computer, because no computer is TC. However, many people, including experts in the field, call computers TC, which is correct in a loose sense. I think we must either allow several definitions, stating that there is no common agreement, or seek an expert opinion. --Mazonka (talk) 10:57, 16 September 2009 (UTC)
 * What are you correcting? My comments all specifically referred to the RISC and OISC languages, not the computers. If we can't agree on what we disagree on (and I'm not sure there really is anything at all), no one else is going to be able to help us. UncleDouggie (talk) 11:25, 16 September 2009 (UTC)
 * Okay. There are two separate things: the computer and the language. What I wanted to say (to correct) is that not only are some RISC languages not TC, but probably all of them, because in the RISC area people try to make a computer rather than a theoretical TC machine, and the language is tightly bound to the hardware. Our disagreement is in the definition of OISC and the article content (please correct me if I'm not right):
 * 1. (r.e.s.'s) OISC instruction can be almost anything, but there are some interesting examples.
 * 2. (mine) OISC must be of LBA class, but there are some instructions (constituting a language) which are TC.
 * 3. (yours) the article focuses only on TC OISC languages.
 * --Mazonka (talk) 12:21, 16 September 2009 (UTC)


 * (To Mazonka) Your "correction" is seriously incorrect, confusing the abstract machine and its associated language (which is what the article is about) with something else. In the present context, your statement that "no computer is TC" is glaringly false, because some OISCs are TC -- they are abstract machines used to model real ones.  An OISC language could not be called TC (as the interesting ones are) unless the associated abstract machine were a universal machine (which the interesting ones are).  The abstract model may be TC, even if its implementation is not.  Your interpretation of RISC totally disregards this aspect of the situation. --r.e.s. (talk) 16:17, 16 September 2009 (UTC)

Need source for RSSB
We don't seem to have a source for RSSB. It was previously attributed to the CAAMP book, but that was incorrect. Here is the only source material I could find:
 * First description
 * Follow-up #1
 * Follow-up #2
 * Esolang page

It seems that we may have to remove the section unless someone can find it in a real paper somewhere. Help! —UncleDouggie (talk) 20:27, 4 October 2010 (UTC)


 * I would hope for anything that can explain what was reversed. What is SSB? Also nice would be a mathematical formula, of the kind that describes the Turing machine; at the moment I'm not sure if the result is stored in the address the PC points to or in the instruction (though I tend to the latter). Half of the esolang wiki is an outdated copy from here. I will have to look directly at the implementations:
 * 91.66.6.127 (talk) —Preceding undated comment added 19:14, 15 August 2016 (UTC)

Subleq derivative languages
The article includes the sentence:


 * For more information see Subleq derivative languages

I'm guessing this used to be a link. However, it says nothing useful now, since it points to nothing, and even a Google search doesn't do much but point back to this article. Any idea what this should refer to? 99.245.230.104 (talk) 08:26, 17 May 2014 (UTC)

The structure of the article has to be changed
Three different types of OISC are TTA, BMM, and Arithmetic. The body of the article, "Instruction types", is broken into several different arithmetic-type examples and general TTA. The structure should rather be:
 * Arithmetic
   * Subleq
   * SBNZ
   * RSSB
   * and others...
 * TTA
   * Example 1
   * Example 2
 * BMM
   * BitBitJump
   * Toga
   * ByteByteJump — Preceding unsigned comment added by 91.230.41.194 (talk) 12:26, 13 January 2015 (UTC)

Synthesized instructions
These intermix 2-argument and 3-argument subleq instructions, but only the 3-arg variant has been introduced above (a 2-arg variant is introduced under the name subleq2). This should be cleaned up.
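One possible cleanup, assuming the usual reading that the two-argument form simply branches to the next instruction, is to expand every subleq2 into the three-argument form. A Python sketch of that expansion (the three-cells-per-instruction layout and the fall-through reading of subleq2 are assumptions, not taken from the article):

```python
def expand_subleq2(program):
    """Expand two-operand 'subleq2 a, b' shorthand into full three-operand
    'subleq a, b, c' instructions, where c is simply the address of the
    next instruction, i.e. the branch always falls through."""
    out = []
    for i, (a, b) in enumerate(program):
        next_addr = (i + 1) * 3          # 3 memory cells per expanded instruction
        out.extend([a, b, next_addr])
    return out

# two shorthand instructions become six memory cells:
# expand_subleq2([(6, 7), (7, 6)]) == [6, 7, 3, 7, 6, 6]
```

If subleq2 is instead defined with some other implicit branch target, the expansion changes accordingly; either way, stating the expansion explicitly would resolve the mismatch the section currently has.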

One instruction set computer—ambiguous?
How to parse? At least one 'Multiple instruction-set computer' was manufactured and sold by IBM—the System 360 model 25—with modifiable microcode that could implement either the IBM 1401 instruction set or a subset of the System 360 instruction set. The microcode was swapped by loading the implementation desired from a card deck. At the time, IBM and Honeywell competed for follow-on business from the IBM 1401—IBM with the System 360 Model 25, Honeywell with the Honeywell 200 series (with a superset of the 1401 instruction set plus an extra bit per memory cell for access delimitation). So, despite its possible status as a term of art, 'Single-instruction set computer' works better for me. — Neonorange (talk) 10:27, 15 March 2017 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified 2 external links on One instruction set computer. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20090613042342/http://www.caamp.info/ to http://www.caamp.info/

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 17:47, 6 December 2017 (UTC)

RSSB description is inconsistent
The text description of RSSB and its pseudo-code don't match. The text description states that the result of the subtraction is placed both in the Accumulator and also in the operand location. The pseudo-code only places the result in the Accumulator.

I don't know which is correct so I can't fix it. 198.91.146.145 (talk) 17:27, 25 April 2021 (UTC)
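To make the discrepancy concrete, here are the two readings side by side as Python sketches. The subtraction direction (operand minus accumulator) is assumed, and the skip-if-borrow control flow is deliberately omitted; a reliable source is still needed to say which reading is correct:

```python
def rssb_text_reading(acc, mem, x):
    """Reading per the TEXT description: the result of the subtraction is
    stored in BOTH the accumulator and the operand location mem[x].
    (Skip-if-borrow logic omitted; returns the new acc and new memory.)"""
    r = mem[x] - acc
    return r, {**mem, x: r}

def rssb_pseudocode_reading(acc, mem, x):
    """Reading per the PSEUDO-CODE: the result goes ONLY to the
    accumulator; mem[x] is left unchanged."""
    return mem[x] - acc, mem

acc, mem = 3, {0: 10}
# text reading:        new acc = 7, mem[0] = 7
# pseudo-code reading: new acc = 7, mem[0] = 10 (unchanged)
```

The two readings agree on the accumulator and diverge only on the memory cell, so a single worked example in the article would settle which semantics it intends.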

In the media section?
Should there be some kind of 'in the media' section that could mention things like SIC-1 (also available on Steam)? 2600:4040:25B5:BE00:0:0:0:707 (talk) 13:21, 2 August 2023 (UTC)