Talk:Self-modifying code
TODO
- an example and discussion of 'high-level' self-modifying code, such as in LISP.
- examples and discussion of traditional uses of self-modifying code, such as in graphics blitting routines, specialisation of algorithms (like a sort with an embedded cmp), and in interpreter kernels.
Is a thunk and/or a trampoline (computers) also a kind of self-modifying code? --DavidCary 03:01, 18 August 2005 (UTC)
I have never written any self-modifying code, but the example of a state-dependent loop doesn't look quite right. Maybe a misplaced curly bracket? --(AG)
- I'll check the brackets. Actually, state-dependent loops are a sort of self-modifying code I've written a few times on 8-bit machines, when the state transition is infrequent, especially if altering just the argument of an opcode, thus using a faster instruction (e.g., on the 6502). Code generation is 'still' relevant and useful, e.g. 'compiled bitmaps' during the 90s, and specific rendering code today. Oyd11 00:42, 13 June 2006 (UTC)
I suggest removing the entire Synthesis section, along with Massalin, Haeberli, and Karsh, on notability grounds. Marc W. Abel 15:12, 26 April 2006 (UTC)
- Futurist Programming should probably be the new link, although the original author of this document should check, as they would know whether this is the correct article.
Javascript example: really self-modifying?
It seems to me that the Javascript code example is not self-modifying. The action variable is merely a function pointer that points to two anonymous functions in the course of its life. All the code from this example could be put in read-only memory and it would execute without problems. Where am I wrong? Sarrazip 02:17, 19 December 2006 (UTC)
I agree with Sarrazip. In addition, I think the Javascript example does not belong under the section "Interaction of cache and self-modifying code".
I have removed the Javascript code example, since no one has objected for several months. Sarrazip 03:05, 21 May 2007 (UTC)
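For reference, the removed example boiled down to reassigning a variable that holds a function. A minimal C sketch of the same idiom (hypothetical names, not the removed JavaScript itself) shows why it is not self-modifying: only a data pointer changes, and both functions could live in read-only memory:

    #include <stdio.h>

    /* Two fixed actions; the instructions for both could sit in
       read-only memory for the program's whole life. */
    static void first_call(void)  { printf("initialised\n"); }
    static void later_calls(void) { printf("already initialised\n"); }

    /* 'action' is mutable data, not code. */
    static void (*action)(void) = first_call;

    int main(void)
    {
        action();             /* runs first_call */
        action = later_calls; /* only the pointer is rewritten */
        action();             /* runs later_calls */
        return 0;
    }

No instruction bytes are ever written; the state machine lives entirely in data.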
Obj-C
Possibly Obj-C code in addition to LISP? It's the only object-oriented superset of ANSI C that I know of that really implements it as a base feature. [1] --Electrostatic1 08:51, 15 May 2007 (UTC)
Self-modifying code in self-referential machine learning systems
I think there should be a section on self-modifying code for machine learning along the lines of Jürgen Schmidhuber's work on meta-learning: http://www.idsia.ch/~juergen/metalearner.html Algorithms 20:54, 4 June 2007 (UTC)
Dead link
- "Synthesis: An Efficient Implementation of Fundamental Operating System Services" -Henry Massalin's PhD thesis on the Synthesis kernel
This appears to be a dead link. Which makes me sad. I was really looking forward to seeing a self-modifying kernel! Guess it's time to whip out Google.
66.93.224.21 11:24, 5 June 2007 (UTC)
- I replaced it with something found on Google.
Reentrant self-modifying code is possible
Many years ago, I had to write a piece of code which was reentrant - because of a number of constraints imposed by my interrupt handling methods - but also needed to be self-modifying. To explain: an input was N, the number of a record in a file, and the assembler supplied only one type of TRAP instruction - with a constant. An earlier generation of the application used a TRAP instruction (it was on a PDP-11) thus:
READ_N: ; R5 points to (unsigned) record number N (assumed <=255 and non-zero)
        MOVB  2(R5),10$   ; Modify the TRAP instruction
10$:    TRAP  0           ; Read "something"
        RETURN
and I needed to retain the mechanism, but also to make it reentrant.
The solution was simple: have a "virgin copy" of the code available (but never called directly). When it was needed, it was copied to the top of the stack, together with "cleanup code"; the copy was then modified and executed, and finally the cleanup wiped the stack of the defiled code. All I can say is that it worked.
My simple statement about self-modifying code is this: in bootstrap code, it's fine - but elsewhere: DON'T EVEN THINK ABOUT DOING IT! (Especially where reentrancy is a prerequisite ...) Hair Commodore 18:57, 16 September 2007 (UTC)
- I've corrected the above code: the error was in the byte addressed - the low byte of a TRAP instruction was to be altered, not the high byte. (It's a long time since I've used a PDP-11 at assembler level - sorry!) Hair Commodore 20:16, 22 September 2007 (UTC)
- Awww, go on, it's not all that bad. What is required is a calm attitude and appreciation of the actual environment. By using the stack working area, you ensure the avoidance of clashes in a quite proper way. This is what multi-stack designs are all about, and by writing in assembler (with proper commentary) you need not be constrained by the shibboleths of the prating orthodoxists of flabbier computer languages that constrain themselves and declare it good. In other words, I have misbehaved also, and declare it good. NickyMcLean (talk) 19:51, 18 December 2008 (UTC)
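The copy-patch-execute scheme described above has a close modern analogue: keep a read-only template of the routine, copy it into a fresh buffer per invocation, patch the operand bytes in the copy, and call it. The following is a minimal C sketch of that idea, not the PDP-11 original - it assumes an x86-64 Linux target where mmap may return memory that is both writable and executable, and the byte template and names are illustrative:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Template: mov eax, imm32 ; ret  (x86-64). Bytes 1..4 hold the
       immediate operand - the analogue of the TRAP constant above. */
    static const unsigned char code_template[] = {
        0xB8, 0x00, 0x00, 0x00, 0x00,   /* mov eax, 0 */
        0xC3                            /* ret        */
    };

    int main(void)
    {
        /* A private buffer per invocation plays the role of the
           per-call stack copy, which is what keeps it reentrant. */
        unsigned char *buf = mmap(NULL, sizeof code_template,
                                  PROT_READ | PROT_WRITE | PROT_EXEC,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;

        memcpy(buf, code_template, sizeof code_template);

        int n = 42;               /* the "record number" */
        memcpy(buf + 1, &n, 4);   /* patch the operand in the copy only */

        int (*read_n)(void) = (int (*)(void))(void *)buf;
        printf("patched copy returned %d\n", read_n());

        munmap(buf, sizeof code_template);
        return 0;
    }

On systems that enforce W^X, the buffer would have to be written first and then flipped to executable with mprotect before the call. The template itself is never touched, so each concurrent caller patches only its own copy.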
JIT?
Maybe I'm being nit-picky, but I don't think a just-in-time compiler falls into the category of self-modifying code, any more than any other compiler would. It generates some code, and then transfers control to it. It doesn't really alter its own behavior. And in the same vein, I don't think that uncompressing some otherwise static code and then running it qualifies as self-modifying, either. I would reserve the term for code that modifies its own behavior as it is running. Maybe it's a rather vague concept, though. Deepmath (talk) 11:02, 15 July 2008 (UTC)
- I utterly agree with the above statement: JIT is not self-modifying. The code is merely being generated, not self-modified; the compiler itself, or the virtual machine, never gets modified. Un(de)compressing doesn't yield any self-modification either. It would be the same as saying that loading dynamic libraries (or any libraries, for that matter) is self-modification - and by that reasoning, any code an OS runs could be viewed as self-modification.
Bestsss (talk) 12:43, 18 December 2008 (UTC)
- If I understand correctly, "Just-in-Time" compilation is equivalent to compiling the whole lot once at the start, in that the code actually executed would be the same. The advantage is presumably that no compiler effort is wasted on execution paths that will not be taken on a particular invocation, and that the compiled code will run faster than interpretation of the text, especially if there are loops. By contrast, consider a program whose purpose is to assess the workings of some routines for numerical integration, such as Simpson's rule. One requirement would be a variety of functions to be integrated, and they might be incorporated via a tiresome "case" statement or similar. Alternatively, the test program could read from an input file the arithmetic statement defining the function, encase that text in suitable text for the definition of a function f(x) in the language of choice, pass the whole to the compiler, and link to itself this new function, which could then be invoked by the testing procedures at full compiled speed, as if it had been part of the whole compilation all along; no messy "case" statement selecting function one, then function two, etc. The difference here is that arbitrary different code would be produced, depending on the list of test functions supplied to a particular run. NickyMcLean (talk) 19:37, 18 December 2008 (UTC)
- That's almost correct. JIT compiles when it's needed (which may mean, for example, merely interpreting a few lines that are never executed again, like the main method, saving the cost of a useless compilation), and JIT may recompile with eager optimizations (escape analysis, inlining, etc.). It simply compiles; it doesn't ever modify itself. It can change the compiled code on the fly, but that is still not self-modification at any rate. I see self-modification only when a program changes the initial code that has been loaded from external media (the network can be considered such) and has 'already' run (so decompression doesn't fit). Bestsss (talk) 09:53, 21 December 2008 (UTC)
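NickyMcLean's read-compile-link scheme can be made concrete with a small C sketch on a POSIX system. This is a hypothetical illustration, not anyone's actual test program: it assumes a cc on the PATH and the dlopen/dlsym interface, wraps an expression read at run time into a function f(x), compiles it as a shared object, and links it into the running process:

    #include <stdio.h>
    #include <stdlib.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* Stands in for the arithmetic text read from the input file. */
        const char *expr = "x*x + 1.0";

        /* Encase the text in the definition of a function f(x). */
        FILE *src = fopen("gen_f.c", "w");
        if (!src) return 1;
        fprintf(src, "#include <math.h>\n"
                     "double f(double x) { return %s; }\n", expr);
        fclose(src);

        /* Hand the whole thing to the system compiler. */
        if (system("cc -shared -fPIC -o gen_f.so gen_f.c -lm") != 0)
            return 1;

        /* Link the freshly built code to ourselves. */
        void *lib = dlopen("./gen_f.so", RTLD_NOW);
        if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }
        double (*f)(double) = (double (*)(double))dlsym(lib, "f");
        if (!f) return 1;

        /* The integration harness can now call f at full compiled
           speed, e.g. from Simpson's rule; here we just sample it. */
        printf("f(3.0) = %g\n", f(3.0));

        dlclose(lib);
        return 0;
    }

(On older systems the host program itself needs -ldl when built.) Whether this counts as self-modification or merely as code generation is exactly the distinction being argued above: the program gains new code at run time, but no existing instructions are overwritten.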
extremely fast operating systems and applications?
Under the heading "Henry Massalin's Synthesis kernel" it is claimed that
    Such a language and compiler [based on Massalin's techniques] could allow development of extremely fast operating systems and applications.
This sounds like pure speculation to me.