Several things delighted me last week. For me, the essence of delight is discovering that something is *possible*.

So, I believed some things were impossible (or simply didn't know they were possible), but they were possible after all! It's like the famous nine-dots puzzle by Samuel Loyd, where you need to «think outside the box».

The first: numerical integration of continuous functions using their values at finitely many points. In most numerical analysis courses, the error is estimated for a once (or, better, twice) differentiable function. Clearly, the well-known methods also work for an arbitrary continuous function, but can we still get an error estimate?

My first guess was «no». Indeed, even on a very dense lattice, an arbitrary continuous function can behave «very badly» between the lattice points. I asked prof. Vityuk (he teaches us numerical methods) this question; he didn't know the answer either, but the next time he brought a monograph by the Bulgarian mathematicians Sendov and Popov, which covers this topic among others.

This wouldn't have been a surprise if the error estimate involved some complicated machinery. But it uses a tool I (thought I) knew — moduli of smoothness. And once I saw it, it became pretty obvious: it's exactly what's needed, a characteristic that replaces the derivative for a merely continuous function.
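To make this concrete, here is the simplest estimate of this kind — my own sketch with the composite left-endpoint rule, not the sharper results from the monograph. Define the modulus of continuity of `f` on `[a, b]`:

```latex
% modulus of continuity of f on [a,b]
\omega(f,\delta) \;=\; \sup_{\substack{x,y\in[a,b]\\ |x-y|\le\delta}} |f(x)-f(y)|

% composite left-endpoint rule with step h=(b-a)/n and nodes x_i = a+ih:
\left|\int_a^b f(x)\,dx \;-\; h\sum_{i=0}^{n-1} f(x_i)\right|
  \;\le\; \sum_{i=0}^{n-1}\int_{x_i}^{x_{i+1}} |f(x)-f(x_i)|\,dx
  \;\le\; (b-a)\,\omega(f,h).
```

Since a continuous `f` is uniformly continuous on `[a, b]`, we have `ω(f, h) → 0` as `h → 0`, so this really is an error estimate valid for *any* continuous function — no derivatives required.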

The second delight was a connection between two areas of science. It's common in science for one area to apply the methods of another. For example, the branch of number theory called analytic number theory (which deals with integers) is built on the theory of analytic functions of a complex variable. And that's no longer surprising once you know it.

But «Applications of the Mellin-Perron Formula in Number Theory» impressed me once again. I see computer science as my vocation, and I'm writing my term paper in analytic number theory (specifically, on Dirichlet series), but I couldn't imagine one could connect the two!
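For context, the formula in question ties a Dirichlet series to the partial sums of its coefficients. Here is a sketch of the classical statement, stated without the precise convergence conditions:

```latex
% Perron's formula: A(s) = \sum_{n\ge 1} a_n n^{-s} a Dirichlet series,
% c greater than its abscissa of convergence, x not an integer:
\sum_{n \le x} a_n \;=\; \frac{1}{2\pi i}
  \int_{c-i\infty}^{c+i\infty} A(s)\,\frac{x^s}{s}\,ds.
```

Shifting the contour and picking up residues of `A(s) x^s / s` is what turns analytic information about the series into asymptotics for the coefficient sums.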

The last delight is about lazy programming languages. I've been learning Haskell for some time and have finally gotten used to its non-strict semantics. I've even put it to use when dealing with infinite lists and other infinite data structures.

But the solution of the rep_min problem from the paper «Designing and Implementing Combinator Languages» amazed me!

Just look at this code:

`(m, r) = (cata_Tree merged_alg t) m`

What's going on? We use `m` on the right-hand side as an argument to a function, and it's computed on the left-hand side of the same equation (via pattern matching)! But Haskell isn't like Prolog — it can't solve equations. The key is the

`(cata_Tree merged_alg t)`

function, which doesn't use its argument to compute the first element of the `(m, r)` tuple. This example has much more to do with Haskell's laziness than all those infinite data structures, and it's really impressive.
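For completeness, here is a minimal self-contained version of the same circular trick. This is my own sketch: the paper builds it from `cata_Tree` and `merged_alg`, which I elide here. `repMin` replaces every leaf by the global minimum in a *single* traversal, feeding the result of the traversal back into the traversal itself:

```haskell
-- A hypothetical simplified tree type (the paper's Tree is analogous).
data Tree = Leaf Int | Bin Tree Tree
  deriving (Show, Eq)

-- Single-pass rep_min via a circular definition: `go` returns the
-- minimum of its subtree TOGETHER WITH a copy of that subtree whose
-- leaves all hold `m` -- the minimum of the WHOLE tree, which is only
-- known once `go` has finished. Laziness makes this well-defined:
-- building the tuple never forces `m`.
repMin :: Tree -> Tree
repMin t = r
  where
    (m, r) = go t
    go (Leaf n)   = (n, Leaf m)
    go (Bin l r') =
      let (ml, l')  = go l
          (mr, r'') = go r'
      in  (min ml mr, Bin l' r'')
```

For example, `repMin (Bin (Leaf 3) (Bin (Leaf 1) (Leaf 2)))` yields `Bin (Leaf 1) (Bin (Leaf 1) (Leaf 1))`. In a strict language this would loop forever; here the tuple is built lazily, so `m` is demanded only when the leaves of the result are inspected.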
