Tuesday 2 December 2014

(CSC165)_Summary

This is possibly the last post for CSC165. The best thing about this Slog is that it is actually the first blog I have ever had, and it is also the first time I have written something to share with others on the internet. Although the topics I wrote about may not be very interesting to people who do not know CSC165, it was still a great start and a different experience.

For me, CSC165 taught more than just logic, proofs, big-Oh and so on; the tutorials, group assignments, challenging problems in class, and also this Slog were equally, if not more, valuable to me. As a student who has already earned a degree before, I can fairly say that this really is a well-designed intro-level course. In order to leave myself an even better memory of this course, I have decided to go back to studying and be well prepared for the final exam XD.

Sunday 30 November 2014

(CSC165)_Nov30_Computability

We started by introducing the ideas of one-to-one (1-1) and onto in order to define |X| = |Y|. More specifically, given two sets X and Y, if there exists a well-defined function f : X ↦ Y that is 1-1 and
onto, then |X| = |Y|. (Note: the idea of 1-1 and onto is more or less the same as what we learned in math courses.)
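To make the definitions concrete for myself, I tried a tiny checker for 1-1 and onto (this is my own sketch, not code from the course, and it only makes sense for finite sets):

```python
# Sanity-checkers for 1-1 (injective) and onto (surjective),
# on finite sets only -- my own illustration of the definitions.

def is_one_to_one(f, X):
    # 1-1: no two different inputs share the same output
    images = [f(x) for x in X]
    return len(images) == len(set(images))

def is_onto(f, X, Y):
    # onto: every element of Y is hit by some x in X
    return set(f(x) for x in X) == set(Y)

X = {0, 1, 2}
Y = {0, 2, 4}
double = lambda x: 2 * x
print(is_one_to_one(double, X))  # True
print(is_onto(double, X, Y))     # True
```

Since `double` is both 1-1 and onto from X to Y, the definition says |X| = |Y|, which matches the fact that both sets have three elements.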

Why do we want this definition? Because it is very helpful for understanding countability, which is just the "surface" of computability. When counting, we are basically using the natural numbers. We say the set of natural numbers N is countable, and then a set A is countable if |A| ≤ |N|.

An interesting fact is that even if some sets are infinite (e.g. N, or the set of odd numbers), they are still countable. So, by the definition, the set of real numbers should not be countable, because |R| > |N|. For now, I think the idea of computability is still very unclear and abstract, and I expect to learn more about it in future courses.
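For example (my own small illustration), the map n ↦ 2n + 1 pairs every natural number with an odd number, 1-1 and onto, which is why the infinite set of odd numbers is still countable:

```python
# The map n -> 2n + 1 is a 1-1, onto pairing between the natural
# numbers and the odd numbers -- so the odd numbers are countable
# even though there are infinitely many of them.

def to_odd(n):
    return 2 * n + 1

pairs = [(n, to_odd(n)) for n in range(5)]
print(pairs)  # [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9)]
```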

Honestly, in the last week of the course, my friends and I were not as focused in class as before. However, we have definitely learned a lot of new things in this course with the help of the professors and TAs. I must thank all of them, and also my friends and classmates, for the time we spent together in this course, and I wish everyone, myself included, a good score on the final exam!

Sunday 23 November 2014

(CSC165)_Nov23_Halting Problem

This week, although we had only two lectures, the content was really much harder for me to digest. Suddenly, we went from the basics to a much deeper topic: the halting problem and computability. The halting problem is, accordingly, one of the first problems to be proved undecidable, by Alan Turing in 1936.

Basically, the problem is to determine whether a program will eventually halt when run with a given input. Clearly, for some specific programs a halt checker is implementable; however, a general one is intuitively very hard to get, and in fact it is proved impossible to get. This is the interesting part of this lecture. I was expecting a proof like the ones we did earlier in this course, but the one we saw was quite surprising to me.

def navel_gaze(f):
    # H(f, v) is the assumed general halt checker: it returns True
    # if and only if f halts when called on input v.
    while H(f, f):
        pass
    return 42

This is a 'simple' counterexample: the call navel_gaze(navel_gaze) halts if and only if it doesn't halt, which is clearly a contradiction, so the checker H cannot exist.

Serving as important background for computability, the halting problem gave me a different taste of logic and math. It is difficult to come up with such a proof, but when reading it, the idea is not all that complex.



Saturday 15 November 2014

(CSC165)_Nov15_BigOh/Omega/Theta(Week 9-10 Topic)

For these TWO weeks, we learned three major topics for evaluating the efficiency of programs: Big-Oh, Big-Omega, and Big-Theta.

Based on what we can do in counting steps, we may be able to establish a specific formula for a program, but very often we only get a general idea of its growth, for example Cn^3 + K, CK^n, or C·ln(n) (where n is the variable and C and K are constants). For these general forms, the three notations help us categorize programs. More specifically, Big-Oh gives us an upper bound, Big-Omega gives us a lower bound, and Big-Theta is a tight bound, which acts as both an upper and a lower bound.

So, more precisely, for any functions f and g that take a natural number as input and give a positive real number as output, we define Big-Oh, Big-Omega and Big-Theta respectively as follows:

f ∈ O(g) iff ∃c ∈ R⁺, ∃B ∈ N, ∀n ∈ N, n ≥ B ⇒ f(n) ≤ c·g(n)
f ∈ Ω(g) iff ∃c ∈ R⁺, ∃B ∈ N, ∀n ∈ N, n ≥ B ⇒ f(n) ≥ c·g(n)
f ∈ Θ(g) iff f ∈ O(g) and f ∈ Ω(g)
So, the questions we will possibly encounter in these topics include proving or disproving that a given function is in Big-Oh, Big-Omega, or Big-Theta. The basic steps of such a proof are:
1) pick c and B
2) assume a generic n
3) assume the antecedent
4) connect the antecedent with the conclusion
5) close the assumptions
Actually making the connection between the antecedent and the conclusion is the hardest part of the proof. However, in general, after determining the goal, we can work in two different directions (reducing the larger side towards the smaller one, or the other way around). In fact, after going through the examples in lecture and the questions in tutorial, I find there is always a relatively big space in which to approach the conclusion. Thus, there normally isn't a single right answer to these questions.
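Step 1 above can at least be sanity-checked numerically before writing the proof. Here is my own small example (the function and constants are mine, not from lecture): for f(n) = 3n² + 2 and g(n) = n², picking c = 4 and B = 2 should make the consequent hold for every n ≥ B:

```python
# Not a proof -- just a numeric sanity check of the chosen c and B
# for the claim f ∈ O(g), i.e. f(n) <= c * g(n) for all n >= B.

f = lambda n: 3 * n ** 2 + 2
g = lambda n: n ** 2
c, B = 4, 2

print(all(f(n) <= c * g(n) for n in range(B, 1000)))  # True
print(f(1) <= c * g(1))  # False -- this is why we need B, not just c
```

Of course, checking finitely many n proves nothing; the real proof still has to connect the antecedent n ≥ 2 with 3n² + 2 ≤ 4n² algebraically.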

On the other hand, we disprove a statement by following the proof structure and proving the negation of the statement. I personally feel disproving is harder than proving, since it is harder to "pick" that n in accordance with the assumed generic B and c. So I should definitely work more on this kind of exercise.

Monday 3 November 2014

(CSC165)_Nov 2_Counting_steps?

Since last week, we have begun to learn counting steps. In addition to my surprise that steps are actually harder to count than I thought, I was at first wondering why counting them is even relevant.

Counting the steps of a simple, brief program can actually be very helpful, because we can get an exact mathematical expression revealing the relationship between the total number of steps and the size of the input. The math can be somewhat hard, but it unintentionally made me begin to realize the importance of the efficiency of a program's algorithm. Of course, at this moment, in my first-year CSC programming course, I may not use a "large" amount of information or data, but in the real world the potential amount of data can be huge.
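Here is my own toy example of what "counting steps" means (not code from lecture): the function returns both the answer and the number of steps it "spent", counting one step per assignment, per loop test, and per return.

```python
# Counting steps of a tiny summing loop: one step for each
# assignment, each loop test, and the return statement.

def sum_list_counting(L):
    steps = 0
    total = 0; steps += 1       # the initial assignment: 1 step
    for x in L:
        steps += 1              # each loop test that succeeds
        total += x; steps += 1  # the loop body: 1 step per iteration
    steps += 1                  # the final loop test that fails
    steps += 1                  # the return statement
    return total, steps

print(sum_list_counting([1, 2, 3]))  # (6, 9)
```

So the step count works out to 2·len(L) + 3, an exact formula that grows linearly with the size of the input.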

Much of the online information about this topic shows how the running time increases for different functions of the number of 'steps'. However, the table below is very impressive, and it is another way to present the issue of efficiency.

(N1 to N6 represent the size of the largest problem that can be solved in one hour)

          With present    With computer       With computer
          computer        100 times faster    1000 times faster
n         N1              100 N1              1000 N1
n^2       N2              10 N2               31.6 N2
n^3       N3              4.64 N3             10 N3
n^5       N4              2.5 N4              3.98 N4
2^n       N5              N5 + 6.64           N5 + 9.97
3^n       N6              N6 + 4.19           N6 + 6.29

(Reference: M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, 1979)

We can see from the table that, for programs with low efficiency, the problem-solving ability within a limited time won't really increase noticeably even if the computer is 1000 times faster. Therefore, as technology develops, a focus on efficient algorithms should necessarily be highlighted, because inefficiency will significantly diminish the benefit of faster hardware.
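Out of curiosity, I checked where the table's numbers come from (my own sketch, not from the book): if the old machine does steps(N) operations in an hour, a machine s times faster does s·steps(N), so we solve steps(M) = s·steps(N) for the new size M.

```python
import math

# For polynomial step counts n^k: M^k = s * N^k, so M = s**(1/k) * N,
# a multiplicative factor. For exponential 2^n: 2^M = s * 2^N, so
# M = N + log2(s), only an additive bump.

def poly_factor(k, s):
    # how much larger a problem an s-times-faster machine solves
    # when the program takes ~n^k steps
    return s ** (1 / k)

print(round(poly_factor(2, 1000), 1))  # 31.6  (the n^2 row)
print(round(poly_factor(3, 100), 2))   # 4.64  (the n^3 row)
print(round(math.log2(1000), 2))       # 9.97  (the 2^n row)
```

This matches the table exactly: polynomial programs get a real multiplicative payoff from faster hardware, while exponential programs only gain a small constant addition.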




Friday 24 October 2014

(CSC165)_Oct 24_More About Proof

After we learned the general structure of proofs, the professor began to introduce some examples of actually proving statements, to help us build the logical connections. Actually, writing a proof is a lot like what we learned about 1) problem solving and 2) logic.

Sometimes we need to decide whether we want to prove a statement or disprove it. To prove it, we can use a direct proof or an indirect proof. A direct proof is more like doing logical analysis step by step: after assuming the antecedent, we finally arrive at the conclusion. An indirect proof is very useful when it is hard to find the connection in a direct way; in this case, we can prove the original statement by proving its contrapositive, since we know a statement's truth value is the same as its contrapositive's. I think, with more practice, the ideal thinking pattern when trying to prove a statement is to think it through from both sides and figure out which is the more efficient way to write the proof.

In terms of disproving, some of us had questions about how we could know a statement was wrong in the first place. Actually, for most of the questions we have at this moment, it is very easy to find out whether a statement is wrong simply by plugging in some examples. However, when facing a more complex statement, the professor told us, which I think is quite interesting and probably very important, that when we try to prove a statement and get stuck somewhere in the process, we may think backwards and consider disproving it instead. Basically, we can disprove the statement by proving its negation. Alternatively, if we want to do the disproof directly, we can also assume the antecedent, and if after that we can logically find a contradiction, we also achieve our goal.
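As a small worked example of my own (not one from lecture): to prove "for every natural number n, if n² is odd then n is odd", the contrapositive "if n is even then n² is even" is much easier to connect directly:

```
Assume n ∈ N                                # generic n
    Assume n is even                        # antecedent of the contrapositive
        Then n = 2k for some k ∈ N
        Then n² = 4k² = 2(2k²)
        Then n² is even                     # consequent of the contrapositive
    Then n is even ⇒ n² is even
Then ∀n ∈ N, n² is odd ⇒ n is odd           # same truth value as its contrapositive
```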

Monday 20 October 2014

(CSC165)_Oct 17_Proof

During the past two weeks, in addition to the mid-term and an assignment, we began to learn to write proofs using the logic-related knowledge we had. It was at first actually very impressive that the professor mentioned he focuses more on the structure of a proof than on the specific techniques used in it. This actually makes a lot of sense, since CSC165 is more about logical thinking than math, and it is particularly important because we can apply to the structure whatever knowledge we will potentially learn in the future.

Basically, for a direct proof,  the structure is generally:

Assume quantifier (domain) ....
    Assume .....
        Let ___ be ___
        Then .....
        .
        .
        .
        Then ....
    Then ....
Then....

(Note: the indentation is also an important part of the structure.)
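To make the structure concrete, here is a small direct proof I tried myself (the statement is my own example, not one from lecture): for all natural numbers n, if n is odd then n² is odd.

```
Assume n ∈ N                                # quantifier (domain)
    Assume n is odd                         # antecedent
        Let k ∈ N be such that n = 2k + 1
        Then n² = 4k² + 4k + 1 = 2(2k² + 2k) + 1
        Then n² is odd                      # conclusion
    Then n is odd ⇒ n² is odd               # close the inner assumption
Then ∀n ∈ N, n is odd ⇒ n² is odd           # close the outer assumption
```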

In addition to a direct proof, we have the choices of proving the contrapositive, and of disproving the negation.