Two hundred kilometers per hour

by Sebastien Mirolo on Thu, 17 Mar 2011

I can feel my mind starting to work at 200 km/h again. Hmmm, maybe 125 mph... I have to admit I still have a hard time with the complexity of the US customary system, plus 200 is a bigger number - more impressive.

Let's Go parallel programming

This week I am reading, reading a lot, about varied subjects. Besides documentation for diverse tools (iptables, fail2ban, lire, dm-crypt and luks), I have also started to spend time looking into the Go programming language once more. It seems a lot of people involved in the Go language were previously very much involved with the Plan 9 effort at Bell Labs, so there is plenty of background and design information to look at from that angle. The rationale and arguments supporting the Go syntax sometimes get very fuzzy. The syntax was clearly geared towards efficient mechanical parsing of source code and it might have been better to leave it at that. As the language evolves and shortcomings and common mistakes pop up, I suspect the syntax will come much closer to including elements deemed "too complex" in other languages.

Some of the language semantics are worth noting because they reflect interesting developments in software thinking. First, the type system strives to cleanly separate interface from implementation and lets the compiler mechanically bridge both. As a result, all dynamic dispatch happens on interfaces and there is no type inheritance. Second, parallel execution is specified (i.e. declared) through go-routines. A set of go-routines can be executed in parallel if processing power permits it; the Go runtime decides that. Synchronization is done through channels, which can be thought of more or less as lightweight Unix pipes.

You might wonder: why another language? Well, the jury is still out on this one. Most of the ideas could have been retrofitted into other languages through extensions and libraries. As the Go programming model catches on, maybe they will be.

Digital geometry

If you have a master's in mathematics and are looking towards computer science, or if you are a programmer with a keen interest in algebra, topology and geometry, I recommend you open one or all three of the following books: Foundations of Multidimensional and Metric Data Structures by Hanan Samet, Digital Geometry by Klette and Rosenfeld, and Multiple View Geometry (second edition) by Hartley and Zisserman. There is definitely advanced mathematics in those books, and the related computer applications have a definite cool factor to them. The Microsoft Kinect and Google Goggles are examples of what can be done by people who understand both mathematics and computers.

Trading and markets

In a different style, the stock market also makes heavy use of mathematics and computers. There is something fascinating about high-frequency trading as well as trading derivatives. The Profit Hunter by Neil DeFalco reads like a book about poker strategy. Once you finish reading Guide to Analysing Companies (fifth edition) by Bob Vause, I definitely recommend you pick up Neil DeFalco's book. You will need a strong heart and a cool head to start trading derivatives. Some days you win and some days you lose; just remember trading is a poker game where everyone has a different number of cards, plays with a different set of rules, and most of the opponents today are complex computer systems.

RSS feeds and digital newspapers

Many trading computer systems aggregate, classify and act upon news feeds, but they are not the only ones that can benefit from advanced feed processing. For example, I subscribed to RSS feeds from Techmeme, Slashdot and OSNews. Many times the same article is linked from many sites and appears in all three feeds. At times, some subject picks up interest from many bloggers and subsequent analyses appear (a few times again) in those feeds. It would be great to have each item show up only once in my aggregated feed, visually grouped around the original post.

Presentation is always important. So far most digital newspapers' web sites are a straight transposition of the paper version. Research into automatic rich layout of feeds and into user interfaces for reading news items is only starting to show the first exciting products (Flipboard and Zite for example). If that subject interests you, I recommend you follow the Monday Note by Frederic Filloux and Jean-Louis Gassee.

Authentication, anonymity and peer-to-peer networks

Speaking of news, the recent events in Tunisia and Egypt have generated a lot of ink (mostly digital ink) about how Twitter and Facebook have been used to organize communications outside governmental channels. The quite drastic step of cutting off the Internet taken by the Egyptian government has also shown that central nodes play a more important role than some people would like to believe.

If you talk to a friend face to face, it is straightforward to tell you are indeed talking to your friend (authentication). If furthermore you pick a secret place to gather, there is little chance of someone eavesdropping and testifying that you both met (anonymity).

On the Internet, things are different. It is as if you and your friend stood on either side of a closed door, communicating by sliding printed notes under it. Authentication becomes complex. Is the person on the other side actually who you think it is? Hell, like Alan Turing, you could even ask yourself: is it even a human being on the other side?

Anonymity is also difficult to manage on the Internet. Imagine you send a letter to your friend through the post office. It needs an address to be delivered correctly, and maybe a return address if you expect an answer. The postman will know that you both communicate with each other even if he never opens your mail. That might be enough to get you suspected, arrested and deported. People who use Tor effectively rely on trusting multiple intermediate delivery guys. Like its real-world counterpart, it is still a solution with many issues, primarily: what if an intermediary gets corrupted?

The technology advances of the last decade have enabled wireless mobile peer-to-peer networks. Wireless broadcasting makes it harder to know the intended recipient of a message and, many times, its exact physical location. Open peer-to-peer networks route your own traffic through multiple nodes and others' traffic through your own device, again making it difficult to guess which messages are intended for you. Notebooks, mobile phones and tablets mean the network is constantly reorganizing, preventing aggregation of traffic through central choke points.

The major advances in technology, science and human rights have always run against the establishment. Today the only safe way to ask questions without running the risk of that quest being labeled "subversive activities" is to go off-grid. Hopefully in the future, through projects like Freenet, it will be possible to buy enough time on the grid to learn, stay informed and build one's opinion before the "proper education" brigade shows up at your front door.

The Internet was designed as an efficient communication system that could survive a nuclear war. Not much attention was paid to free speech or privacy in that design. Nonetheless, its potential for efficiently sharing information across the globe has changed communications, sweeping governments and altering power structures forever. The playing field has seen a major earthquake; the battle for knowledge remains.

System forensic analysis

Next week I will start hacking on the fortylines server again. I plan to get fail2ban and spamassassin working together, with the idea of preventing comment spambots from hitting my web site so hard. As many times before, I will use the Ubuntu package manager to install the applications and then figure out how the daemons were configured by default, where the config files are, where the log files are, etc. It would be so cool to have a system forensic analysis tool that could show, for any installed system, be it Ubuntu or Red Hat based, a simple table such as:

running daemon  configs                                 logs                                 connected to: through
apache2         /etc/apache2/httpd.conf                 /var/log/apache2/website-access.log
                /etc/apache2/site-enabled/website.conf
sshd            /etc/ssh/sshd_config                    /var/log/auth.log
fail2ban        /etc/fail2ban/jail.conf                 /var/log/fail2ban.log                sshd: /var/log/auth.log

Maybe such a tool exists; I have not looked much into it yet.
