
Minutes from past meetings

Meeting one:

We started off with the basics of Lisp. The most interesting discussion was "normal order" versus "applicative order" evaluation and why the difference matters. While the two should be equivalent, they aren't exactly: we talked about why they're not computationally equivalent, why they should be mathematically equivalent, and how we're going to get around that problem in future. An interesting idea that came up was to compute the same expression both ways and see what happens: do we get the same answer? The catch is that the result depends on the compiler. A more interesting comparison would be energetic: measure how much energy is expended in each calculation, relate that directly to entropy using the energy/entropy/information equivalence, and see what we get, which would really indicate what's going on.
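As a reminder of where the two orders actually diverge, here is roughly the test from exercise 1.5 of the book (reconstructed from memory):

(define (p) (p))            ; an expression that never finishes evaluating

(define (test x y)
  (if (= x 0) 0 y))

(test 0 (p))
; Applicative order evaluates the operands first, so it loops forever on (p).
; Normal order never needs y, so it simply returns 0.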


Meeting two:

This time we reached the part about the processes that procedures generate, which starts with iterative versus recursive structure. The interesting point here is that this is a property of the calculation that is done rather than of the form of the code. Two particularly interesting questions emerge. One is how we define the runtime of a program, because in lambda calculus it's just a function: it has an input and gives an output. This is "instantaneous", so what is the mathematical meaning of runtime?

I propose that runtime is the maximum length of the expression as it's expanded during a calculation. The question remains: is that under normal order or applicative order? We don't yet know, as we haven't yet got to the bottom of what the difference between the two is.

Second point: what is the meaning of space complexity here? I propose that every time we make a recursive call, if we see the same variable again it's not actually the same variable, and it needs to be stored in a "different place in memory", so to speak. Really we should call it not factorial(n) but factorial(n'), where n' is n-1; every time we have a new variable it takes up more space, so the space complexity is the number of unique names we have in our program.
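To illustrate both proposals with the usual factorial (our own sketch, not the book's code):

(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))

; Expanding (factorial 3) under the substitution model:
;   (* 3 (factorial 2))         ; n   = 3
;   (* 3 (* 2 (factorial 1)))   ; n'  = 2
;   (* 3 (* 2 1))               ; n'' = 1
; The expression grows before it shrinks (the proposed "runtime"), and each
; level holds its own copy of n (the proposed "space": the number of unique names).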

Additional points that arose:

  • Hardness versus randomness
  • The role of randomness in decision-making and in the consciousness of humans and animals

Meeting three:

What should we call procedures that operate on higher-order procedures? Higher-higher-order procedures? What about the bit where lambda calculus defines numbers and constants as (nested) functions? Should any function receiving or returning a number already be considered "higher-order"?
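For reference, the encoding we had in mind is Church numerals, in the spirit of exercise 2.6 (a sketch; the conversion at the end is our own check):

(define zero (lambda (f) (lambda (x) x)))

(define (add-1 n)
  (lambda (f) (lambda (x) (f ((n f) x)))))

; A Church numeral n is "apply f n times"; converting back to an ordinary number:
; (((add-1 (add-1 zero)) (lambda (k) (+ k 1))) 0)  =>  2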


 

Meeting four:

Lisp can (but doesn't, for reasons of efficiency) define data as functions as well, e.g. the definition of pairs given in section 2.1.3. This is in line with the tenet of lambda calculus that everything is a function. It also led to the following interaction:

Roy: What do you mean, everything is a function? Surely not everything?

Yoni:
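(For reference, the kind of definition the exchange was about - a simplified sketch in the spirit of section 2.1.3, not the book's exact code, which dispatches with an error check:)

(define (cons x y)
  (lambda (m) (if (= m 0) x y)))   ; the pair is just a procedure remembering x and y
(define (car z) (z 0))
(define (cdr z) (z 1))

; (car (cons 'a 'b))  =>  a
; (cdr (cons 'a 'b))  =>  b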

Other notes:

  • It's not obvious that the readers know what resistors (or Ohms) are, and in fact section 1.3 already uses integrals, assuming everyone knows what they are and what approximations exist for computing them. The book is clearly intended for students with a wide background.
  •  To use the rational numbers we must use their special operators – this prevents generic use, and therefore breaks the abstraction barrier. To keep the abstraction in place, we need generic operators.
  • In Lisp, the underlying data representation is always a list (LISt Processing!), which is very inefficient for many applications. However, Lisp was mostly used for graphs in AI, where lists are an excellent choice. It is also possible that Lisp has other (built in) data structures we haven’t met yet.

Meeting Five:

The material was the basics of data structures - there wasn't much we had to say about it other than joking about "c(a|d)*r". 

We did discuss how to write "map" so that it can accept a number of lists (as described in footnote 12) - we were able to do it using a number of recursive calls, essentially using "map" to get the first element of each of the lists, running the function on them, and then moving on. The termination condition would be using "(and (map empty? ...) ...)" - 4 uses of "map" in all ;D. 
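Something along these lines - a reconstruction after the fact, not the exact four-map version we wrote at the table; "any" and "multi-map" are our own names:

(define (any pred lst)
  (cond ((null? lst) #f)
        ((pred (car lst)) #t)
        (else (any pred (cdr lst)))))

(define (multi-map f . lists)
  (if (any null? lists)                             ; stop when any list runs out
      '()
      (cons (apply f (map car lists))               ; apply f to the heads
            (apply multi-map f (map cdr lists)))))  ; recur on the tails

; (multi-map + '(1 2 3) '(10 20 30))  =>  (11 22 33)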

Another topic was interpreters for Lisp and Scheme - as it turns out, there are many. They vary in time taken to produce code and in the efficiency of the code produced, as well as support for different dialects of the language. 

"The Laundry Files" by Charles Stross came up when discussing "what if magic worked like programming", see: https://9gag.com/gag/aGZZXL0. 


Meeting Six: 

 

We started this discussion with derivatives, how they're defined mathematically, and how you can define fractional derivatives and even complex order derivatives (using Fourier Transform).
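For the record, the definition we sketched is the standard Fourier-multiplier one (stated from memory): since differentiating f once multiplies its Fourier transform F(k) by ik, a derivative of order a - fractional or even complex - can be defined by taking the inverse Fourier transform of (ik)^a * F(k). For positive integer a this coincides with the ordinary derivative.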

The next thing we discussed was trees and how we might have a pointer to a parent in the tree. The answer is really that in Scheme the objects we're discussing are all pointers and not the objects themselves, so we can have a pointer upwards in the tree and this is fine. This does, however, mean the interpreter has to be a lot more complicated: the distinction between objects and pointers has to be made somewhere, and the interpreter has to keep track of where the actual objects are and whether they're still in use (garbage collection). I believe we will see this later on in the book.

The biggest question, and the one we didn't really get very deep into (mostly because it's so big), is quotation and what symbols are.

The best example we've come up with in this session is variables in logic. These are different from variables in programming languages, and they're also not just placeholders: they are symbols in their own right, but not entirely the same as anything else.

Another question: When we compare two objects, do we want to compare if they're the same symbol or if they evaluate to the same value or some combination? The answer to that is of course yes! And the question becomes: how do we choose which of those different options we want at each point?
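A concrete illustration of the different notions of "the same", using the standard predicates (our own example):

(define a (list 1 2))
(define b (list 1 2))
(define c a)

(eq? a b)        ; => #f  -- not the same object (different pointers)
(equal? a b)     ; => #t  -- the same value, element by element
(eq? a c)        ; => #t  -- literally the same object
(eq? 'foo 'foo)  ; => #t  -- symbols with the same name are the same symbol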

Finally, we reach the question of a language that can interpret itself, where expressions in the language are in fact also part of the language itself. Roy suggested that for this reason any executable might have to include the interpreter itself in order to interpret input, or even just the code.

We also briefly discussed Quaternions.


Minutes from meeting seven:

For variety, we started with a discussion of how you could understand climate change in terms of entropy. For starters, we assume that the energy coming in and the energy leaving the Earth are almost exactly identical (otherwise the Earth would warm or cool very fast, and there'd be no debate). However, the arriving energy is in a sense "coherent" and can be harnessed for work, while the energy leaving is entirely thermally distributed (and so can't). While this view neglects many aspects (greenhouse gases, toxins, etc.), it's an interesting exercise in abstract thermodynamics.
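Roughly quantified (a back-of-the-envelope approximation, nothing more): the same energy flux Q arrives as radiation characteristic of the Sun's surface temperature (about 5800 K) but leaves at the Earth's effective temperature (about 255 K), so the entropy carried out far exceeds the entropy carried in, S_out - S_in ≈ Q·(1/T_earth - 1/T_sun) > 0.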

As to generic programming, the two reasonable suggestions are:

  1. A dispatch table, which works pretty well but requires "magic" get and put functions. This is a "down and dirty" solution, but is reasonably efficient and easy enough to use and reason about (a sketch follows this list).
  2. A pure functional solution. This is peerlessly elegant (to quote Charles Stross), but very inefficient and hard to reason about. Still, it's in keeping with "everything can be expressed in lambda calculus", an important tenet of this exercise.
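A minimal sketch of the "magic" get and put from option 1, kept in a plain association list. The names and the representation are ours, not the book's (which defers the table itself to chapter 3), and add-rat below is a hypothetical rational-arithmetic procedure:

(define dispatch-table '())

(define (put op type item)
  (set! dispatch-table
        (cons (list (list op type) item) dispatch-table)))

(define (get op type)
  (let ((entry (assoc (list op type) dispatch-table)))   ; assoc compares with equal?
    (and entry (cadr entry))))

; (put 'add '(rational rational) add-rat)
; (get 'add '(rational rational))   =>  add-rat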

We also briefly discussed part 2.5 of the book: yes, we can build a type graph; yes, we can theoretically provide conversion operators (or 'coercion', as the book calls them); but in any realistic system we'd want this to be done (at least mostly) by the compiler, so proving that it's doable is more important than providing a sufficiently efficient algorithm for it.

We are also excited about moving on to chapter 3, where things will (we hope) get even more interesting.


Meeting eight:

There were some mixed feelings entering the meeting, as the introduction of “set!” (and with it variables) seems to disrupt and even destroy the elegant structure we’ve built and maintained so far. However, I can say that we will rebuild even “set!” using functional tools, so it will work out.

We discussed how "set!" forces us to finally abandon "normal order" evaluation, by making it difficult to replace expressions with their equals, and how applicative order doesn't actually work as before either. We will have to wait and see how evaluation works now that we have this tool.
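The standard illustration of why substitution breaks - roughly the bank-account example from section 3.1, simplified and reconstructed from memory:

(define balance 100)

(define (withdraw amount)
  (set! balance (- balance amount))
  balance)

; (withdraw 25)  =>  75
; (withdraw 25)  =>  50
; The "same" expression now yields different values at different times, so we
; can no longer replace an expression by its value the way the substitution
; model (and normal-order reasoning) assumes.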

Places where we disagree with the book: for starters, the concept of equality (between expressions) has been difficult to pin down ever since we introduced quoting. The addition of "set!" mostly means that a temporal component is added (so far), which is not as much of a problem (as we see it). Additionally, we're not sure that learning "imperative programming" first is a bad move. This is as opposed to starting with untyped and garbage-collected languages, which we know hinder the understanding of those concepts.

Obfuscation of code was discussed at length, with Udi bringing examples of a compiler designed to make a particular de-compiler draw pictures when attempting to decipher the code, and other sneaky virus techniques. This brought us around to the usual discussion of C++'s syntax, which (some would claim) needs no further obfuscation. Some links to talks about the single-instruction obfuscator and anti-reverse-engineering can be found in the links section.

As a final note, we mentioned (again) the difficulty with random VS pseudo-random numbers, and we should consider inviting someone to give a talk about hardness VS randomness (maybe Noam Nissan himself?).


Meeting nine:

  • We discussed the difference in meaning of the term "functional programming" between SICP's writing and the present day. Specifically, in the past it referred more to passing functions as data ("functions are first class"), while nowadays the focus is more on immutability. This led us to a discussion of the evolution of spoken language as well, and of people misunderstanding old scripture by trying to read it using their knowledge of modern language. Similarly, we talked about the transformations loan words go through as they move from language to language. It also made us really feel the lack of any remaining linguists in our group.
  • After a brief mourning of the beautiful mathematical model that was lost to the addition of assignment, we went over exercises 3.18 and 3.19. This pair of exercises not only showcases the book's tone ("a very clever idea"), but also presents, in a non-contrived way, how immutable code can still more elegantly solve problems that seem to require assignment - during the chapter on assignment, no less (a sketch of the idea follows this list).
  • The front of an accordion bus is the car, the back is the cdr.
  • Wiki of SICP exercise solutions: http://community.schemewiki.org/?sicp-solutions
  • There were also discussions on the selection of philosopher names in the rust manual, the precise nature of ki blasts and IT
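The "very clever idea" sketched in constant space and without assignment (our own code, in the spirit of exercise 3.19 - the two-pointer "tortoise and hare"):

(define (contains-cycle? lst)
  (define (safe-cdr x) (if (pair? x) (cdr x) '()))
  (define (race slow fast)
    (cond ((not (pair? fast)) #f)        ; hit the end of the list: no cycle
          ((eq? slow fast) #t)           ; the fast pointer lapped the slow one
          (else (race (safe-cdr slow)
                      (safe-cdr (safe-cdr fast))))))
  (race (safe-cdr lst) (safe-cdr (safe-cdr lst))))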

 

Meeting Ten (21.03.19) – Guest lecture by Noam Nissan:


Noam Nissan told us about his PhD work (from many years ago) on Hardness VS Randomness.

Noam told us about two approaches to the issue:

  1. A computational construction, where we can make a randomized algorithm completely deterministic. The construction is as follows: assume we have a "hard" problem that takes some input of size log(n) and outputs a string of length n, such that no efficient algorithm can guess the next output bit with probability better than e, for e that is exponentially small. Now we run the algorithm for the hard problem on every one of the inputs (there are only polynomially many of them, since each is log(n) bits long), and run the randomized algorithm with each of those outputs as its random bits. Since the outputs we are using for random bits are indistinguishable from true random bits (up to some exponentially small e), the randomized algorithm must give the same result as it would with "true" random bits.
  2. The computational method does not, however, give any bounds on the run-time of the "hard" algorithm, so it might not be useful. The cryptographic approach assumes a "one-way function" as the "hard" problem, and thus can give cryptographic security for a number of applications, from WiFi security to email encryption.

Meeting Eleven (10.04.19):


The material for this meeting was parts 3.4 and 3.5, which proved to each deserve their own individual meeting. We might return to the end of 3.5 at the next meeting.

We found several of the footnotes interesting and noteworthy, starting with the discussion of a "completely fair arbiter" to act as a scheduler for a concurrent system. As expressed succinctly by Jean Buridan, "A perfectly rational dog placed between two equally attractive sources of food will starve to death". Also, to quote some graffiti seen on a Cambridge building wall: "Time is a device that was invented to keep everything from happening at once."

We also see that synchronization requires magic “mutexes” or “atomic actions” to work – and the only place to get those is hardware. When elegant theory meets the real world, some of its beauty is destroyed.
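To make the point concrete, here is roughly the book's construction from section 3.4.2 (reconstructed from memory): a mutex can be built out of test-and-set!, but test-and-set! itself has to be indivisible, and only the hardware can promise that.

(define (make-mutex)
  (let ((cell (list #f)))
    (define (the-mutex m)
      (cond ((eq? m 'acquire)
             (if (test-and-set! cell)
                 (the-mutex 'acquire)))          ; busy - retry
            ((eq? m 'release) (set-car! cell #f))))
    the-mutex))

; Returns #t if the mutex was already taken, otherwise grabs it and returns #f.
; The test and the set must happen as one atomic step - exactly the "magic"
; that has to come from the hardware; as plain Scheme this is NOT atomic.
(define (test-and-set! cell)
  (if (car cell)
      #t
      (begin (set-car! cell #t) #f)))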

To conclude our discussion of concurrency, play this game: https://deadlockempire.github.io/  

Next we turn to streams, which turn out to be another way to introduce time into our system "without" using assignment. While the idea looks charming at first, we quickly find out that it needs assignment (in the form of "set!") to work efficiently, still requires a scheduler for concurrency, and though it appears elegant, it might only be sweeping more problems deeper under the rug.
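The place where set! sneaks back in is memoization - roughly the book's memo-proc from section 3.5.1, from memory: forcing a delayed expression caches its result, and the cache is assignment.

(define (memo-proc proc)
  (let ((already-run? #f)
        (result #f))
    (lambda ()
      (if (not already-run?)
          (begin (set! result (proc))
                 (set! already-run? #t)
                 result)
          result))))

; (delay <exp>) is then essentially (memo-proc (lambda () <exp>)),
; and (force promise) is just (promise).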

An interesting point about using streams is that they require a distinction between "regular" arguments and "delayed" arguments, which are actually the front ends of streams. This could be solved by making all arguments "delayed", leading us right back to normal-order evaluation, which we thought was less good back in chapter 1.

Finally, a note about prime numbers: the Chebyshev theorem that for all primes p_(n+1) <= p_n^2 certainly holds with strict inequality, for if p_(n+1) were equal to p_n^2 it would be a perfect square and hence not prime!

 

Meeting Twelve (01.05.19): 


The most significant element of this meeting was the Y combinator. This construct of pure lambda-calculus beauty is the foundation of recursion, and the example from part 4.1 is reproduced here:

((lambda (n)
   ((lambda (fact)
      (fact fact n))
    (lambda (ft k)
      (if (= k 1)
          1
          (* k (ft ft (- k 1)))))))
 10)

Several of us were able to comprehend the Y combinator, and Roy gave a very useful interpretation (though it is entirely alien to lambda calculus): in order to recur, a function must be able to call itself. However, since the function is not yet defined, it cannot be passed to itself as an argument.

This difficulty is overcome by the Y combinator: it receives a function, and applies that function with the function itself as the first parameter. The recursive function must now only call its first parameter (with that self-same parameter in turn) in order to recur.
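For completeness, one way to package the trick as an actual combinator - the applicative-order Y, sometimes called Z. This is our own sketch, not code from the book:

(define (Y f)
  ((lambda (x) (f (lambda (v) ((x x) v))))
   (lambda (x) (f (lambda (v) ((x x) v))))))

(define fact
  (Y (lambda (self)
       (lambda (k)
         (if (= k 1) 1 (* k (self (- k 1))))))))

; (fact 10)  =>  3628800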

Other points of note that came up: 

  1. Apparently there exists a de-movfuscator (a link can be found in the links section).
  2. As everyone well knows, a computing system is mature when it can run DOOM. The links section now has videos showing that keys with programmable characters (the newest fad in keyboards) are mature.....

 

Meeting Thirteen (15.05.19): 


Prompted by hy (http://docs.hylang.org/en/stable/), we discussed how Lisp is pretty much writing the AST directly. As opposed to other languages, which have to transform complex syntaxes into trees, Lisp looks pretty much like the data structure it parses into. This makes metaprogramming feel a lot more natural - you're dealing with data structures rather than text manipulation, but they're still the data structures you think in when you code.
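A tiny illustration of "the program is the data structure" (assuming an interpreter that provides R5RS's interaction-environment; the details of eval's second argument vary between implementations):

(define expr (list '+ 1 (list '* 2 3)))

expr                                   ; => (+ 1 (* 2 3))  -- just a list
(eval expr (interaction-environment))  ; => 7              -- the same list, run as code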

Next we talked about how the nondeterminism presented in chapter 4.3 is pretty deterministic in nature. In the end, it is a systematic, ordered scan, which can find itself in infinite loops that wouldn't be an issue to a true nondeterministic machine. 

As for lazy evaluation, we noted that it can often happen (particularly noticeable in Haskell) that the overhead of creating thunks is worse than just doing the calculation itself directly, not to mention the larger memory footprint.
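The overhead in question, sketched with our own names (the evaluator of section 4.2 builds something similar): every delayed argument allocates a little tagged structure that later has to be tested for and unwrapped, which for a cheap argument can cost more than just computing it.

(define (make-thunk exp env) (list 'thunk exp env))
(define (thunk? obj) (and (pair? obj) (eq? (car obj) 'thunk)))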

Finally, it was rather interesting to see the word "automagically" make an appearance, not to mention being given a source citation.

 

Meeting Fourteen (29.05.19)


The section for this meeting was about logic programming. This is a topic that was heavily researched in AI, though interestingly, not with the methods presented in this part. This is strange because Lisp was also a prominent AI tool.

Another interesting observation is that the basic evaluator can be seen as a "forward" process, while the "amb" non-deterministic evaluator can in a sense be seen as a "backwards" evaluator. In logic programming, we have an "in all directions" evaluator, that returns many answers (all database entries satisfying the query) - so even the user can no longer pretend that a single straightforward (even non-deterministic) process is happening. 

A final important point is that the "not" function in this section is not the logical "not" (try saying that three times fast). It should be understood as "this can't be proven", rather than "prove the negation of this statement".

 

Meeting Fifteen (12.06.19)


We had a review meeting, to allow everyone to catch up and for people who missed some meetings to rejoin.

We discussed the points we found interesting in past meetings, and found that some members have new insights about them:

 

  1. We could argue that things are equal if they behave equally from a functional point of view. While this is in no way a prescriptive definition (and might be exponentially difficult or even undecidable to implement), it holds mathematically. If we decide we need a method that is computationally feasible, we can decide whether we're comparing bits or following pointers, and this once again gives a closed definition.
  2. Quoting: the meaning of quoting can be understood as "use this element syntactically" (rather than semantically). This would mean, e.g., comparing the name rather than the value of the variable, which is what we want to happen.
  3. An interesting point about Hy: commenting blocks of code is automatic - when the opening bracket is commented, the entire block up to the closing bracket becomes a comment. The interesting point is that the comment can either return "None" or return nothing, and how we might define such a thing in Scheme: commenting is a function that receives a block of code, and either returns "None" (this is the option where the commented code returns nothing!) or returns a function that returns "None" (which is very cute).
  4. Udi was at PyCon last week and went to a workshop about Lisp. A comment made there was "what is the distinction between code = a text file in some PL and 'data'?". Now that we've written a Scheme evaluator it's clear to us that this is not clear cut or even necessarily well defined. 

 

Meeting Sixteen (26.06.19)


The material for this meeting was part 5.1, discussing the beginning of building a register machine in Lisp. The initial construct is a data flow graph and a control graph (both flow-charts), from which in turn we can create lists of registers, operations on registers, and finally a sequence of actions to perform that is equivalent to the control graph.

This ends up looking exactly like assembly written in Lisp, which is not new to any of us. The two comments we have are:

1) As mentioned in the book, the commands "looking like Lisp" is a disadvantage. This only becomes an obvious mark against the format here because in the other chapters where sub-languages were developed (e.g. logic programming) we did not have a preconceived notion of what the language "should look like", so the disadvantage of "looking like Lisp" was less glaring.

2) Having separate compare and jump commands follows the 80x86 architecture, and is otherwise not well motivated. This is of course not a significant philosophical or methodological issue.

 

Meeting Seventeen (10.07.19)


We covered two topics, matching parts 5.2 and 5.3. 

Part 5.2 deals with the assembler itself, mostly using tricks we have already seen: object-oriented programming in a functional language using message passing, and the creation of "thunks" in the textual-analysis phase so that the evaluation phase is faster. There is one gem that deserves special mention:

The function "extract-labels" needs to return two lists that will be used as parameters for another function call. The usual method of doing this would be to create a new structure to hold them, or (in Lisp) just 'cons' them together and have the receiving end take them apart again. However, here a more inventive method is used: the function to be called on the lists is passed to "extract-labels" as a parameter, and "extract-labels" calls it on the (recursively built) lists it must eventually process.

This is a neat context switch from a programming viewpoint: instead of returning the parameters to a function, we send the function to the parameters!
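The trick in its general form (our own sketch, not the book's code): instead of returning two lists, take the consumer as an argument and hand it the lists directly.

(define (split-evens-odds lst receive)
  (if (null? lst)
      (receive '() '())
      (split-evens-odds (cdr lst)
        (lambda (evens odds)
          (if (even? (car lst))
              (receive (cons (car lst) evens) odds)
              (receive evens (cons (car lst) odds)))))))

; (split-evens-odds '(1 2 3 4) (lambda (e o) (list e o)))  =>  ((2 4) (1 3))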

Part 5.3 deals with memory management, and specifically with automatic "garbage collection" mechanisms that allow the programmer to ignore memory considerations entirely. This process always looked like magic from the outside (to the members who were present, at least), and so we were delighted to find out that at least the "stop and copy" mechanism is quite simple.

 

Meeting Eighteen (24.07.19)


Our penultimate meeting, discussing the explicit-control evaluator. At first glance, this appears to be a translation of the meta-circular evaluator into assembly (or rather, the assembly defined in Scheme). However, upon inspection, some very interesting and important issues appear. These all have to do with the fact that the meta-circular evaluator relied on the underlying Scheme implementation, so many things (like evaluation order) that could be left as "implementation details" there must be implemented here.

Two points that we explicitly discussed:

1) The tail-recursion optimization appears to be a minor improvement - "we're just saving two or three stack operations" - but actually makes a world of difference. Contrast this with the other optimization presented in the same breath - creating the argument list after processing the first argument (instead of before) - which also saves two stack operations.

Roy pointed out that the difference is not in the stack - it's in the frame. While the first argument already needs the new frame (environment), so creating it is inevitable, the call in tail position does not. So it superficially appears that we're only saving a few stack operations, when in reality we can do away with the whole new frame by overwriting the old one.
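The two cases being contrasted, sketched in plain Scheme (our own example): in loop the recursive call is the very last thing the procedure does, so an evaluator with the tail-call optimization can simply overwrite the current frame; in grow it is not, because the (+ 1 ...) still has work to do after the call returns.

(define (loop n)
  (if (= n 0) 'done (loop (- n 1))))      ; runs in constant space

(define (grow n)
  (if (= n 0) 0 (+ 1 (grow (- n 1)))))    ; space grows with n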

2) Error reporting. Adding full error reporting to the system is as much work as writing a full debugger, because in fact that's exactly what it is. We also discussed the three approaches to error reporting: the C approach of a global error-code variable (which even C does not like), the C++ method of exceptions, and the "return a special code" method, where the returned value can be either the expected one or an error-signalling value (optionally with, say, a pointer to more information).

Also, we uncovered a serious flaw in the C++ compiler - it allows you to write in C++.