## Friday, October 25, 2013

### Dedication!

In "The Lady Tasting Tea," the author (David Salsburg) discusses the calculations in a statistical paper R. A. Fisher published in the 1920s. It is mind-blowing to think of the machine Fisher used to produce the tables and charts in the paper.

The author estimates it must have taken him 185 hours to prepare the tables.

## Wednesday, October 23, 2013

### Interesting use of probabilistic model

In this lecture the presenter describes a system of differential equations that is solved for a set of initial conditions. The results are used to build a probabilistic model: the distribution of solution values supplies the probability model for state transitions in a dynamic network. The dynamic network is then used to make inferences about the system directly, without needing to solve the differential equations.

An interesting side note: even with deterministic differential equations describing the system (in this case a biological one), our observations of the system's state are noisy and infrequent (once every 30 minutes), further justifying the probabilistic approach over a deterministic one.

## Tuesday, September 17, 2013

### Monads vs Objects

The example in the MonadRandom documentation does a good job of showing how monads are useful. I have cut and pasted it here:

There is some correspondence between notions in programming and in mathematics:

- random generator ~ random variable / probabilistic experiment
- result of a random generator ~ outcome of a probabilistic experiment

Thus the signature

rx :: (MonadRandom m, Random a) => m a

can be considered as "rx is a random variable". In the do-notation the line

x <- rx

means that "x is an outcome of rx".

In a language without higher order functions and using a random generator "function" it is not possible to work with random variables; it is only possible to compute with outcomes, e.g. `rand()+rand()`. In a language where random generators are implemented as objects, computing with random variables is possible but still cumbersome, as you need to wrap the generators (Composition Pattern) in objects and perform the computation as methods of the new object.

In Haskell we have both options: either computing with outcomes

```
do x <- rx
   y <- ry
   return (x+y)
```

or computing with random variables

```
liftM2 (+) rx ry
```

This means that liftM-like functions convert ordinary arithmetic into random variable arithmetic [since you can't do lift in OO, you would need to write your own version of each operation you'd like to support]. But there is also some arithmetic on random variables which cannot be performed on outcomes [or would be ad hoc in OO]. For example, given a function that repeats an action until the result fulfills a certain property (I wonder if there is already something of this kind in the standard libraries)

```
untilM :: Monad m => (a -> Bool) -> m a -> m a
untilM p m = do x <- m
                if p x then return x else untilM p m
```

we can suppress certain outcomes of an experiment. E.g. if `getRandomR (-10,10)` is a uniformly distributed random variable between −10 and 10, then `untilM (0/=) (getRandomR (-10,10))` is a random variable with a uniform distribution over {−10, …, −1, 1, …, 10}.
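For reference, the quoted fragments can be assembled into a small runnable program. Note an assumption on my part: I substitute `randomRIO` from the `random` package for MonadRandom's `getRandomR`, to keep the example self-contained:

```haskell
import Control.Monad (liftM2)
import System.Random (randomRIO)

-- untilM, as in the quoted documentation: repeat the action
-- until the result satisfies the predicate.
untilM :: Monad m => (a -> Bool) -> m a -> m a
untilM p m = do
  x <- m
  if p x then return x else untilM p m

main :: IO ()
main = do
  -- computing with random variables: lift (+) over two generators
  s <- liftM2 (+) (randomRIO (-10, 10)) (randomRIO (-10, 10 :: Int))
  -- suppress the outcome 0
  nz <- untilM (/= 0) (randomRIO (-10, 10 :: Int))
  print (s, nz)
```

Since `untilM` is polymorphic in the monad, it works equally well outside of randomness, e.g. over `Maybe`.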

## Wednesday, July 24, 2013

### Computational Oriented Programming instead of Functional Programming.

IMHO, the term "functional programming" doesn't do justice to the concept that lies behind it. I like to think of it as Computational Oriented Programming.

The idea is that you have computations (e.g. state-full computation, non-deterministic, or reactive computation). Each computation is a polymorphic type. You think of your computations as having:

- Constructors
- Morphisms
- Observers

Constructors either build a new computation from external types (e.g. putting an integer into a non-deterministic computation, i.e. a list), or they are higher-level combinators that combine two or more instances of your computation type into a new instance (concatenating two lists). The key point about constructors is that the end result is the same computation type you started with. Morphisms are when you apply a function to the values in your computation (adding one to every element of the list); the end result is the same computation, but the values have been morphed.

If you always stay within your computation, it has no value to the outside world. So you need observers to export the result of your computation to other forms of computation. Caveat: the term "observers" here is not the same as the observers used in some FRP implementations.
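As a concrete sketch, take the list as the non-deterministic computation. The names `single`, `combine`, `morph`, and `observe` are my own illustrative labels for the three roles (in Haskell terms they correspond to `return`/`pure`, `(++)`, `fmap`, and a fold):

```haskell
-- Constructors: build the computation from ordinary values,
-- or combine two computations into a new one of the same type.
single :: a -> [a]
single x = [x]

combine :: [a] -> [a] -> [a]
combine = (++)

-- Morphism: map a function over the values; the result is still
-- the same kind of computation.
morph :: (a -> b) -> [a] -> [b]
morph = map

-- Observer: export a result out of the computation to the outside world.
observe :: [Int] -> Int
observe = sum

main :: IO ()
main = print (observe (morph (+ 1) (combine (single 1) (single 2))))
-- [1] ++ [2] = [1,2]; morph (+1) gives [2,3]; observe sums to 5
```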

So when you think of, say, Functional Reactive Programming, you start by thinking about the constructors and combinators you would need to express event processing. Your goal is to define all the ways one could possibly construct an event handler. A key point in your design must be to define the algebra of combining the FRP computation constructors. The algebra ensures that any complex combination of the constructors and morphisms on the type is coherent and consistent; otherwise you have a problem on your hands and need to go back to the drawing board.

Once the constructors and combinators are defined, a user can define event handlers. Next you define the ways the user can get the results of the event-handler computations out to the outside world. When you are done, you have created a new type of computation that, due to its polymorphic nature, can be used in a variety of contexts. That, in essence, is how you do functional design.
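To make the recipe concrete, here is a toy sketch of the event side. `Event`, `never`, `occurrences`, `merge`, `mapE`, and `runEvent` are my own illustrative names (modeling an event simply as the list of its occurrences), not any real FRP library:

```haskell
-- A toy event type: an event is just the list of its occurrences.
newtype Event a = Event [a]

-- Constructors / combinators: the ways to build an event handler.
never :: Event a
never = Event []

occurrences :: [a] -> Event a
occurrences = Event

merge :: Event a -> Event a -> Event a
merge (Event xs) (Event ys) = Event (xs ++ ys)

-- Morphism: transform the values carried by the event.
mapE :: (a -> b) -> Event a -> Event b
mapE f (Event xs) = Event (map f xs)

-- Observer: export the results of the event computation.
runEvent :: Event a -> [a]
runEvent (Event xs) = xs

main :: IO ()
main = print (runEvent (mapE (* 2) (merge (occurrences [1, 2]) never)))
-- merge yields occurrences [1,2]; mapE (*2) yields [2,4]
```

The algebra shows up in laws such as `merge e never = e` and `mapE f (merge e1 e2) = merge (mapE f e1) (mapE f e2)`, which this toy model satisfies.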

## Thursday, May 23, 2013

### Even more reasons not to use Scala

In response to my post on Quora, where I recommended against the use of Scala for any serious project, a reader posted this:

Let's say you have three functions (f, g, and h) that each receive an integer and perform an asynchronous computation, returning a Future of Int, and you need to chain the three functions together (use the result of one computation as input to the next). In Scala you'd do:

```
f(x).flatMap(g(_).flatMap(h(_)))
```

or with the "for" notation:

```
for {
  i <- f(x)
  ii <- g(i)
  iii <- h(ii)
} yield iii
```

What appears to be a clever (or "cute") use of monadic composition actually seems to be completely misleading. A closer look at the "flatMap" implementation in Future shows that:

```
def flatMap[S](f: T => Future[S])(implicit executor: ExecutionContext): Future[S] = {
  val p = Promise[S]()
  onComplete {
    case f: Failure[_] => p complete f.asInstanceOf[Failure[S]]
    case Success(v) =>
      try {
        f(v).onComplete({
          case f: Failure[_] => p complete f.asInstanceOf[Failure[S]]
          case Success(v) => p success v
        })(internalExecutor)
      } catch {
        case NonFatal(t) => p failure t
      }
  }(executor)
  p.future
}
```

In other words, your set of futures is no longer running independently and asynchronously (as you would expect of Futures); they are composed sequentially into a single Future. If that was your original goal, then you should have composed the functions (in this case f, g, and h) and run the composition in a single Future. On the other hand, when you do:

`f(x).flatMap(g(_).flatMap(h(_)))` you must be thinking that the functions are running in parallel and working on each other's output. But it seems to me (from reading the code; I have not actually tested it, as I don't have a Scala dev environment handy) that "g" would not run at all until "f" is finished. Again, not what you would expect when you are using Futures.

### Bayesian Learning Lectures

A set of very informative, not-to-be-missed lectures on Bayesian learning by Dr. Draper.

Bayesian Modeling, Inference, Prediction and Decision-Making

## Friday, May 10, 2013

### What is "high level" or "low level" or "functional"?

...what does it mean for one library to be more "high level" or "low level" or "functional" than another? On what basis should we make such comparisons? A pithy answer is given by Perlis:

> A programming language is low level when its programs require attention to the irrelevant. — Alan Perlis

But how should we decide what is relevant?

Within the functional programming community, there is a strong historical connection between functional programming and formal modeling. Many authors have expressed the view that functional programming languages are "high level" because they allow programs to be written in terms of an abstract conceptual model of the problem domain, without undue concern for implementation details.

Of course, although functional languages can be used in this "high level" way, this is neither a requirement nor a guarantee. It is very easy to write programs and libraries in pure Haskell that are littered with implementation details and bear little or no resemblance to any abstract conceptual model of the problem they were written to solve.

Source: Genuinely Functional User Interfaces

## Tuesday, May 7, 2013

### Meaning of NOSql and BigData for Software Engineers

The term NoSQL, read as "No SQL", doesn't convey the real meaning of the concept behind it. After all, SQL stands for Structured Query Language. It is a language for querying relations, based on relational algebra and relational calculus. A query language has no bearing on the kind of processing that NoSQL implies. In fact you can use SQL to query your "NoSQL/BigData", as in Hive.

The term NoSQL read as "Not Just SQL" is closer to the implied meaning, but it still doesn't really convey what NoSQL is all about. It is not about what language you use to query your data.

"Big Data" is also not really meaningful. Sure, the size of the data might be large, but you can have a NoSQL problem with small data (at least by today's relative standards).

IMHO, what NoSQL intends to say is that your data is not ACID. ACID, as in Atomic, Consistent, Isolated, and Durable, has been the cornerstone of transactional databases. In "NoSQL" you are dealing with persisted data without strict guarantees on its atomicity, consistency, isolation, and/or durability. In other words, you have noisy data with duplication, inconsistency, and loss. The goal of NoSQL is to develop software that can work with such data. Even if your data is small but noisy, the standard algorithms that work on ACID data would not yield useful results. To them non-ACID data is garbage, and you end up with a garbage-in, garbage-out dilemma.

A better way to think of NoSQL is to think of the problem of inference about the underlying model in the data, prediction on future data, and/or making decisions, all using noisy data (of any size). That is the problem that has been addressed in the statistics community as Bayesian analysis. The challenge for software developers tackling NoSQL/Big Data problems is to understand and incorporate statistical analysis in their applications. A good place to start is Professor David Draper's excellent encyclopedic write-up Bayesian Statistics, or his priceless in-depth lectures on the topic, Bayesian Modeling, Inference, Prediction and Decision-Making.

## Tuesday, March 12, 2013

### How I feel about Scala

I posted my feelings about Scala in response to a question on Quora.

Is there a programming language that's as elegant and supported as Haskell, and as practically useful as Erlang?

## Friday, March 8, 2013

### When to use Applicative vs Monad

A nice rule of thumb as to when you must use Monad and when you can use the Applicative style:

How do you use Control.Applicative to write cleaner Haskell?

Update: Maybe another way to look at this is to think of >>= as a combinator that applies the mapping function and then flattens the result of applying (a -> m b), and (in some implementations) changes the computation based on the value of the "a". In computations where >>= only flattens the types, the computation can be converted to an Applicative one.
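A small sketch of the distinction (my own example, not from the linked answer): when the structure of the computation does not depend on intermediate values, Applicative suffices; when a later step depends on an earlier value, you need >>=.

```haskell
-- Applicative is enough: the shape of the computation is fixed in
-- advance; >>= would only be flattening.
addMaybes :: Maybe Int -> Maybe Int -> Maybe Int
addMaybes mx my = (+) <$> mx <*> my

-- Monad is needed: whether the second step succeeds depends on the
-- value produced by the first.
halveIfEven :: Int -> Maybe Int
halveIfEven n = if even n then Just (n `div` 2) else Nothing

chained :: Maybe Int -> Maybe Int
chained mx = mx >>= halveIfEven

main :: IO ()
main = do
  print (addMaybes (Just 2) (Just 3)) -- Just 5
  print (chained (Just 4))            -- Just 2
  print (chained (Just 3))            -- Nothing
```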

## Tuesday, March 5, 2013

### Adding numbers as digits in a list

In a phone interview with an unnamed company :) I was asked to come up with a solution for a hypothetical adder where the input numbers are integers given as lists of single digits. For instance, 345 would be [3,4,5]. The problem is to write an adder for such numbers. I kind of like the simplicity of the solution in Haskell (as opposed to my Java-like doodle in the interview).

You can find my code Here
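A sketch of the idea (my own reconstruction, not necessarily the linked code): reverse both lists so addition runs least-significant-digit first, carry as you go, then reverse back.

```haskell
-- Numbers are lists of single digits, most significant first: 345 ~ [3,4,5].
addDigits :: [Int] -> [Int] -> [Int]
addDigits xs ys = dropLeadingZero (reverse (go (reverse xs) (reverse ys) 0))
  where
    -- add least-significant-first digit lists with a carry
    go [] [] c = if c == 0 then [] else [c]
    go as bs c =
      let (a, as') = uncons0 as
          (b, bs') = uncons0 bs
          s = a + b + c
      in (s `mod` 10) : go as' bs' (s `div` 10)
    -- treat a missing digit as 0
    uncons0 [] = (0, [])
    uncons0 (d : ds) = (d, ds)
    -- a single leading zero can appear when lengths differ; strip it
    dropLeadingZero (0 : ds@(_ : _)) = ds
    dropLeadingZero ds = ds

main :: IO ()
main = print (addDigits [3, 4, 5] [6, 7]) -- 345 + 67 = 412 -> [4,1,2]
```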

## Friday, March 1, 2013

### Refreshing my statistics

I am watching a nice set of lectures from Harvard's Stat 110 class.

Watch the Lectures

Professor Joe Blitzstein is great. One particularly puzzling problem is presented at the 36:00 mark of the Stat 123 teaser.

I have not been able to figure out the puzzle about the expected value of the Euro vs. the Dollar. If you happen to have any ideas on it, I would appreciate them.
