# Single Instruction Computer

There is a computer with a single instruction:

sbn a b c

where a, b, and c are addresses; constants are not allowed as operands, only addresses. The instruction subtracts the contents of b from the contents of a and stores the result back in a. If the result is less than zero, it branches to address c; otherwise it continues with the next instruction.

Also, if no c is given in the instruction, then it defaults to the next instruction. Assume that there is a location in memory called one that holds the number 1.
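Programs for this machine are easy to check with a tiny simulator. Here is a minimal sketch in Python; the tuple encoding, the `run_sbn` name, and the step limit are my own choices, not part of the original machine description.

```python
# Minimal SBN ("subtract and branch if negative") simulator.
# Memory is a dict of named cells; an instruction is a tuple
# (label, a, b, c) where c is the branch target or None.

def run_sbn(program, memory, max_steps=10000):
    labels = {lab: i for i, (lab, _, _, _) in enumerate(program) if lab}
    pc, steps = 0, 0
    while pc < len(program):
        steps += 1
        if steps > max_steps:
            raise RuntimeError("step limit exceeded (possible infinite loop)")
        _, a, b, c = program[pc]
        memory[a] -= memory[b]              # a = a - b
        if memory[a] < 0 and c is not None:
            pc = labels[c]                  # branch when the result is negative
        else:
            pc += 1                         # otherwise fall through
    return memory

# Example: a = a + b through a scratch cell, as in the procedures below.
mem = run_sbn(
    [
        (None, "temp", "temp", None),   # temp = 0
        (None, "temp", "b",    None),   # temp = -b
        (None, "a",    "temp", None),   # a = a - (-b) = a + b
    ],
    {"a": 3, "b": 4, "temp": 0, "one": 1},
)
print(mem["a"])   # 7
```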
Initialize a variable to zero:

sbn temp temp ; temp = temp - temp = 0

Add b to a:

sbn temp temp ; temp = 0

sbn temp b ; temp = -b

sbn a temp ; a = a - (-b) = a + b

Left-shift a number a (equivalent to multiplying it by 2):

sbn temp temp ; temp = 0

sbn temp a ; temp = -a

sbn a temp ; a = a - (-a) = 2a

Implement an unconditional jump:

sbn temp temp ; temp = 0

sbn temp one <label> ; temp = -1 < 0, so the branch is always taken

Implement an assignment operation, i.e. assign b to a:

sbn a a ; a = 0

sbn temp temp ; temp = 0

sbn temp b ; temp = -b

sbn a temp ; a = 0 - (-b) = b

Multiply a with b and store the result in c (a must be non-negative; a is destroyed in the process):

sbn temp temp ; temp = 0

sbn c c ; c = 0

sbn temp b ; temp = -b

label: sbn a one out ; a = a - 1; when a goes negative, we are done

sbn c temp ; c = c - (-b) = c + b

sbn temp2 temp2 ; temp2 = 0

sbn temp2 one label ; temp2 = -1 < 0: jump back to the loop test

out: <Done>
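Multiplication by repeated addition can be exercised with a small simulator sketch (the tuple encoding and cell names are illustrative, not part of the original machine):

```python
# Check the multiply loop (non-negative a) with a small SBN simulator.
# An instruction is (label, a, b, branch-target-or-None); the branch
# is taken when the subtraction result goes negative.

def run_sbn(program, memory):
    labels = {lab: i for i, (lab, _, _, _) in enumerate(program) if lab}
    pc = 0
    while pc < len(program):
        _, a, b, c = program[pc]
        memory[a] -= memory[b]
        pc = labels[c] if (memory[a] < 0 and c is not None) else pc + 1
    return memory

mult = [
    (None,    "temp",  "temp",  None),     # temp = 0
    (None,    "c",     "c",     None),     # c = 0
    (None,    "temp",  "b",     None),     # temp = -b
    ("label", "a",     "one",   "out"),    # a -= 1; done when a goes negative
    (None,    "c",     "temp",  None),     # c = c - (-b) = c + b
    (None,    "temp2", "temp2", None),     # temp2 = 0
    (None,    "temp2", "one",   "label"),  # temp2 = -1: jump back
    ("out",   "temp",  "temp",  None),     # landing point
]
mem = run_sbn(mult, {"a": 3, "b": 5, "c": 0,
                     "temp": 0, "temp2": 0, "one": 1})
print(mem["c"])   # 15
```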

Negate a number (num = -num):

sbn temp1 temp1 ; temp1 = 0

sbn temp2 temp2 ; temp2 = 0

sbn temp1 num ; temp1 = -num

sbn temp2 temp1 ; temp2 = num

sbn num num ; num = 0

sbn num temp2 ; num = -num

Multiply when both a and b can be negative numbers:

First record the sign of each operand and replace it with its absolute value, multiply with the procedure above, then flip the sign of the result once for each operand that was negative (two flips cancel out).

sbn pos-a pos-a ; clear scratch cells and flags

sbn pos-b pos-b

sbn is-A-Zero is-A-Zero ; becomes 1 if a was negative (or zero, which is harmless)

sbn is-B-Zero is-B-Zero

sbn pos-a a assign-A-Done ; pos-a = -a; if a > 0, nothing to do

sbn temp temp ; temp = 0

sbn temp one ; temp = -1

sbn is-A-Zero temp ; is-A-Zero = 1

sbn temp temp ; temp = 0

sbn a a ; a = 0

sbn temp pos-a ; temp = a's original (negative) value

sbn a temp ; a = |a|

assign-A-Done: sbn pos-b b assign-B-Done ; same treatment for b

sbn temp temp

sbn temp one

sbn is-B-Zero temp ; is-B-Zero = 1

sbn temp temp

sbn b b

sbn temp pos-b ; temp = b's original (negative) value

sbn b temp ; b = |b|

assign-B-Done:

<Follow the same as multiply procedure: c = a * b>

sbn temp temp

sbn is-A-Zero one a-Positive ; flag was 0: a was positive, skip the flip

<Follow the negate procedure above to reverse the sign of c>

a-Positive: sbn is-B-Zero one b-Positive ; same check for b

<Follow the negate procedure above to reverse the sign of c>

b-Positive: <Done>
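The control flow of the signed version is easier to see in a high-level sketch. This Python function mirrors the flag-and-fixup idea; the names are mine, not from the procedure:

```python
def signed_mul(a, b):
    """Multiply with sign handling, mirroring the SBN procedure:
    record which operands were negative, multiply the absolute
    values by repeated addition, then flip the sign of the result
    once per negative operand (two flips cancel out)."""
    a_was_negative = a < 0          # plays the role of the is-A-Zero flag
    b_was_negative = b < 0          # plays the role of the is-B-Zero flag
    a, b = abs(a), abs(b)
    c = 0
    for _ in range(a):              # the repeated-addition multiply loop
        c += b
    if a_was_negative:
        c = -c
    if b_was_negative:
        c = -c
    return c
```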

Implement sign checking: load an address b with true (1) if a is negative, false (0) otherwise:

sbn temp temp ; temp = 0

sbn b b ; b = false

sbn temp a a-is-not-less-than-zero ; temp = -a; if a > 0, a is not negative

sbn temp one a-is-not-less-than-zero ; temp = -a - 1; negative only when a = 0

sbn temp temp ; a is negative: set b to true

sbn temp one ; temp = -1

sbn b temp ; b = 0 - (-1) = 1

a-is-not-less-than-zero: <Done>

Implement compare-to-zero: load an address b with true (1) if a is zero, false (0) otherwise:

sbn b b ; b = false

sbn temp temp ; temp = 0

sbn temp a a-is-nonzero ; temp = -a; if a > 0, a is non-zero

sbn temp one a-is-zero ; temp = -a - 1; negative only when a = 0

a-is-nonzero: sbn temp temp ; a is non-zero: b stays false

sbn temp one EOP ; unconditional jump

a-is-zero: sbn temp temp ; a = 0: set b to true

sbn temp one ; temp = -1

sbn b temp ; b = 1

EOP: <Done>
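A zero test like the one above can be checked end to end with a simulator sketch (the encoding and cell names are my own):

```python
# Sketch: set cell "b" to 1 iff cell "a" is zero, using only sbn.
# Instruction = (label, a, b, branch-target-or-None); the branch is
# taken when the subtraction result goes negative.

def run_sbn(program, memory):
    labels = {lab: i for i, (lab, _, _, _) in enumerate(program) if lab}
    pc = 0
    while pc < len(program):
        _, a, b, c = program[pc]
        memory[a] -= memory[b]
        pc = labels[c] if (memory[a] < 0 and c is not None) else pc + 1
    return memory

def is_zero(a):
    prog = [
        (None, "b", "b", None),             # b = false
        (None, "temp", "temp", None),       # temp = 0
        (None, "temp", "a", "nonzero"),     # temp = -a; a > 0 means non-zero
        (None, "temp", "one", "zero"),      # temp = -a - 1; negative only if a = 0
        ("nonzero", "temp", "temp", None),  # a != 0: b stays false
        (None, "temp", "one", "eop"),       # unconditional jump
        ("zero", "temp", "temp", None),     # a = 0: set b to true
        (None, "temp", "one", None),        # temp = -1
        (None, "b", "temp", None),          # b = 1
        ("eop", "temp", "temp", None),      # landing point
    ]
    return run_sbn(prog, {"a": a, "b": 0, "temp": 0, "one": 1})["b"]
```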

Still to do: left rotate, right rotate, signed arithmetic

# Computational Thinking

Some aspects of computational thinking (from Jeannette Wing's paper of the same title):

• Computational thinking is reformulating a seemingly difficult problem into one we know how to solve, perhaps by reduction, embedding, transformation, or simulation.
• Computational thinking is thinking recursively.
• It is parallel processing.
• It is interpreting code as data and data as code.
• It is type checking as the generalization of dimensional analysis.
• It is recognizing both the virtues and the dangers of aliasing, or giving someone or something more than one name.
• It is recognizing both the cost and power of indirect addressing and procedure call.
• It is judging a program not just for correctness and efficiency but for aesthetics, and a system’s design for simplicity and elegance.
• Computational thinking is using abstraction and decomposition when attacking a large complex task or designing a large complex system.
• It is separation of concerns.
• It is choosing an appropriate representation for a problem or modeling the relevant aspects of a problem to make it tractable.
• It is using invariants to describe a system succinctly and declaratively.
• It is having the confidence we can safely use, modify, and influence a large complex system without understanding its every detail.
• It is modularizing something in anticipation of multiple users or prefetching and caching in anticipation of future use.
• Computational thinking is thinking in terms of prevention, protection, and recovery from worst-case scenarios through redundancy, damage containment and error correction.
• It is calling gridlock deadlock and contracts interfaces.
• It is learning to avoid race conditions when synchronizing meetings with one
another.
• Computational thinking is using heuristic reasoning to discover a solution.
• It is planning, learning and scheduling in the presence of uncertainty.
• It is search, search, and more search, resulting in a list of Web pages, a strategy for winning a game, or a counterexample.
• Computational thinking is using massive amounts of data to speed up computation.
• It is making trade-offs between time and space and between processing power and storage capacity.

This kind of thinking will be part of the skill set of not only other scientists but of everyone else.

# The Connection Between Design, Theory And Empirical Analysis

From Algorithms and Data Structures: The Science of Computing:

Each method of inquiry also interacts with the others. After designing a program, computer, or algorithm, the designer needs to test it to see if it behaves as expected; this testing is an example of empirical analysis. Empirical analysis involves experiments, which must be performed on concrete programs or computers; creating these things is an example of design. Designers of programs, computers, or algorithms must choose the design that best meets their needs; theory guides them in making this choice. Theoretical proofs and derivations often have structures almost identical to those of the algorithm they analyze—in other words, a product of design also guides theoretical analysis. Empirical analysis tests hypotheses about how a program or computer will behave; these hypotheses come from theoretical predictions. Theory inevitably requires simplifying assumptions about algorithms in order to make mathematical analysis tractable, yet it must avoid simplifications that make results unrealistic; empirical analysis shows which simplifications are realistic and which aren’t.

# 10 Reasons Why You Are Lazy

There is a post that a professor sent to his potential students on why they may not want to take his course. But it carries deeper insight: into inaction, sluggishness, and carelessness.

10. You have no interest in either the theory of programming or the
programming of theory.

9. You never get into arguments with your friends about the merits of
C, C++, Java, C#, Visual Basic, Javascript, Perl, or Python.

8. You understand all the error messages your favorite compiler gives
you and you’re sure there’s no possible way to make those more helpful.

7. You’re not interested in helping people program more efficiently,
because you want to make sure your friends all have jobs for a long
time.

6. You’re not interested in how to design languages that students
could learn with less trouble, because you want to make sure you
and your friends all keep your TA jobs.

5. Your research doesn’t have anything to do with a domain-specific
language that communicates directions to a computer, because you
don’t ever use SQL, XML, neural networks, Bayesian networks, model
checkers, or work with any special-purpose data in a particular domain.

4. You know that there is only one right way to solve all programming
problems.

3. You think that there is no connection between type safety in
languages like Java and your safety when you browse to a web page
with a cool web applet.

2. You are sure that object-oriented programming methods will never be
superseded by some new-fangled paradigm (like aspect-oriented programming).

1. You know that you’ll be programming in FORTRAN, er… Pascal, no,
wait, …  C, no, ummm… C++, no, wait, definitely Java (well,
maybe C#), for the rest of your career.

# Constructing a Tree Given Ancestor Matrix

The ancestor matrix is defined as follows:

If there are n nodes in the tree, then it is an n x n matrix. An element a[i][j] is defined as follows:

a[i][j] = 1, if node i is an ancestor of node j,

otherwise a[i][j] = 0.

Given this matrix, construct the tree.

My algorithm is as follows:

• Find the all-zero column in the matrix – it is the root of the tree. Let's call it node k.
• Remove row k and column k from the matrix. While there are more elements in the matrix, do:
  • Find all columns that have a single 1 in them.
  • For each such column, make the row index the parent of the column index, i.e. if a[n][m] is the only 1 in column m, then n is the parent of m.
  • If there are no such columns, mark the corresponding indices as the leaves of the tree and quit.
  • Remove the corresponding rows and columns from the matrix.

What is the complexity of the algorithm? Finding a zero column or a single-1 column costs O(n^2) per pass, and there can be up to O(n) passes (one per level of the tree), so the worst case is O(n^3). But if the ancestor matrix is represented as a sparse matrix, then the complexity of the algorithm reduces drastically – need to figure that out.

The algorithm is very crude and needs to  be refined.
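The same idea can be realized compactly in Python using a related observation: a node's depth equals the number of 1s in its column, and its parent is the ancestor one level up, i.e. the ancestor with the most ancestors of its own. The function and variable names below are my own, and this is a sketch rather than a transcription of the steps above:

```python
def tree_from_ancestor_matrix(a):
    """Given an n x n ancestor matrix (a[i][j] == 1 iff node i is an
    ancestor of node j), return (root, parent) where parent[j] is the
    parent of node j and parent[root] is None. A node's ancestors form
    a chain, so its parent is the deepest ancestor, where
    depth(i) = number of 1s in column i."""
    n = len(a)
    depth = [sum(a[i][j] for i in range(n)) for j in range(n)]
    root = depth.index(0)                      # the all-zero column
    parent = [None] * n
    for j in range(n):
        if j == root:
            continue
        ancestors = [i for i in range(n) if a[i][j] == 1]
        parent[j] = max(ancestors, key=lambda i: depth[i])
    return root, parent

# Example tree: 0 -> {1, 2}, 1 -> {3}
m = [
    [0, 1, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
root, parent = tree_from_ancestor_matrix(m)
```

Both the column counts and the parent search are single scans of the matrix, so this version runs in O(n^2) overall.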

# Ubuntu Tips

This webpage will be frequently updated with Ubuntu tips as and when I get to know them.

• Setting the number of workspaces: Type gconf-editor at the command prompt. A window opens. Go to apps->compiz->general->screen0->options. On the right pane one can set the hsize and vsize parameters to get hsize*vsize workspaces.
• Disabling the tracker indexing service: Go to Menu->System->Preferences->Sessions. Uncheck the tracker indexing service and click Close.
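The workspace setting can also (I believe) be changed from the command line with gconftool-2; treat the exact key paths below as an assumption that mirrors the apps->compiz->general->screen0->options path shown in gconf-editor:

```shell
# Set a 3x2 grid of workspaces under Compiz (key paths assumed to
# mirror the gconf-editor tree described above).
gconftool-2 --type int --set /apps/compiz/general/screen0/options/hsize 3
gconftool-2 --type int --set /apps/compiz/general/screen0/options/vsize 2
```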

# What Does It Take To Become an Innovator

Bjarne Stroustrup on sys-con answers:

A solid technical education, a sense of what is practical, persistence, impatience with dogma, a willingness to take (calculated) risks. In many cases, an aesthetic sense that deems existing solutions inadequate and guides innovation. You can’t innovate in the abstract; every innovation is a response to problems.

I think idealism often plays a part. Individuals who are just out for themselves are too easily diverted into short-term money-making schemes or corporate climbing.