Peter's blog

blog about computers, science, and computer science


Is C++ a niche language?


This is a review of an old blog post. I actually changed it quite a bit, turning the meaning 180 degrees. :)

C++ was (and still is) my bread and butter for many years. I know it pretty well, have written tons of code in C++, and have seen even more of it. My main conclusion from over a decade of experience is that C++ is a great tool for those who know how to program. It’s also a great tool for creating spaghetti code, for those who know the language but don’t really know how to program. By programming I mean building a piece of software that works, is maintainable, and doesn’t require an unreasonable amount of resources in the process.

Linus Torvalds believes C++ sucks, saying: “It’s made more horrible by the fact that a lot of substandard programmers use it, to the point where it’s much much easier to generate total and utter crap with it.”

I respectfully disagree with the Maestro. It’s not C++ that sucks. It’s not the violin’s fault that there are so many bad violinists around.

C++ is the most flexible programming language: it gives you the ability to control your objects’ memory layout down to the level of bytes and bits, and yet to use such high-level mechanisms as polymorphism, exceptions, and meta-programming. In a sense, it allows you to work on high-level design and still be able to super-optimize when necessary. (What a potential for screwing things up! :) ) I love this language, and I have personally written quite complicated pieces of software in it.

Another point of criticism might come from a totally different direction: why the heck did it take the C++ committee so long to introduce the range-based for loop? This one:

for (int i : vec)
    cout << i;

Because C++ lacked “syntactic sugar” for a bunch of standard idioms, it was becoming obsolete in this new, rapidly changing world.

I would take it one step further and put the “blame” on the very foundation of the C language, which is:

  • Very simple syntax
  • The rest can be found in libraries

Modern programming languages (C#, Python, Ruby, even not-so-modern Perl) rarely follow this pattern (not to mention the fact that they are all VM-based). Their standard libraries change more often, and you would be lucky if they were always backward compatible. Their syntax and built-in data structures are more complicated than the ones found in C or C++. A steeper learning curve is the price that programmers pay, but after that the development time is significantly reduced. Moreover, most employers nowadays prefer to hire people proficient with a specific platform or framework (or even a specific combination of the two), rather than just a language.

Given all this, C++ seems to be in decline. Basically, it’s becoming a niche language for applications requiring good performance, like high-frequency trading, real-time systems, etc. But even then, some projects willingly sacrifice performance by moving to C#, for the sake of maintainability.

In some fields, however, people are so determined to optimize for performance that not only is C++ the only option, some even try to write their code in pure C, for the sake of “not messing up” and, well, speed of compilation. All that made me really think about the differences between C and C++, which I will probably cover in the next post.

Annoying NTP attack exploiting IPMI


The Big Bang Theory: teaching physics

It’s been a warm summer evening in ancient Greece…

Actually, it was a cold snowy morning in the modern Midwest when my phone chimed with an arriving email. The email (forwarded to me by my colo service) stated, literally, the following:

A public NTP server on your network, running on IP address XXX.XXX.XXX.XXX, participated in a very large-scale attack against a customer of ours today, generating UDP responses to spoofed “monlist” requests that claimed to be from the attack target.

The last time my websites got hacked was years ago, and this particular server of mine had never been hacked before (spitting over the left shoulder, knocking on wood, and blessing the fact that my cat is red, not black), so I just logged onto my little colocated server, made sure the NTP service was not running (not installed on that box at all, to be precise), and thought it might be some sort of mistake. I talked to the colo support; they seemed to be as puzzled as I was, and because the complaint was not confirmed the next day, they decided to just close the ticket.

A week later, though, the story repeated itself, and it was much worse this time: I got a call from the colo service informing me that I had over-used my monthly traffic quota (which is huge, BTW, and I had never used even 10% of it), putting me in their debt for over $1000. At the same time, the ticket mentioned above was re-opened, because my colo service had received another furious email from the same source.

It’s a totally different story how I found my way out of it, but I have to mention that my colo company really did their best to help me with it.

But I still had a hacker attack to investigate and to protect myself against.

The original complaint was about my NTP server, specifically (presumably) its “monlist” option. The NTP attack started to spread around Dec. 2013, and still seems to be around (read here).

A recommended cure is to upgrade the NTP server to the most recent version, which doesn’t have that “monlist” option.
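If upgrading isn’t convenient, the commonly published alternative is to disable the monitor facility in the NTP configuration. A sketch of the relevant /etc/ntp.conf lines (option names follow the classic ntpd access-control scheme; double-check them against your ntpd version):

```
# Disable the "monlist" facility abused by the amplification attack
disable monitor

# Don't answer queries from arbitrary clients at all
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
```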

Well, here is the thing: I don’t have an NTP server running at all. I can’t totally rule out the possibility of my Ubuntu account being compromised, but the logs didn’t show anything of that nature.

It probably was a lucky guess that allowed me to fix the problem in no time… I could just as easily have spent 4 hours on it. What got compromised was my IPMI account (remote management for Super-Micro computers). I found that I couldn’t log into it, plain and simple. Luckily, resetting your IPMI password can be done with ipmitool, which can be found in the standard Ubuntu repository:

modprobe ipmi_devintf
ipmitool -I open user set password 2 ADMIN

And if loading just that module is not enough, you can load the related modules that were probably not loaded by default:

modprobe ipmi_msghandler
modprobe ipmi_devintf
modprobe ipmi_si

and then call ipmitool.

Then I simply reset my password and disabled the IPMI NTP service entirely.

I went and checked my colo traffic reports…


Holy smoke! And of course, it wasn’t the IP my box is sitting on. It was my IPMI IP.

Frankly, I can never understand the trend of adding features just for the sake of adding features… Exhibit A: an NTP service as part of my remote computer management. There must be some reason network people and system administrators want this feature, but to me it just looks like another potential vulnerability.

BTW, this DDoS attack seemed to be really global. Here you can read more details about what really happened. I wonder how many computers were hacked around the world…


Arduino Leonardo clone from Borderless Electronics


So, I’ve received my $9 Arduino, which included all the perks from Borderless Electronics (some resistors, LEDs, push buttons, and some other stuff).


First and foremost, I got Arduino up and running in literally 10 minutes, with most of the time spent on downloading and installing the programming tool.

I connected it to an LED and a button, wrote a simple program, and it all worked right away.

But here is the best part: it uses USB (USB-A to micro-B) as its programming connection, and it can feed right off it! Meaning, it should work with any standard 5V cell phone power supply. With the cost down to $9, a standard power connector, flexibility in power supply (from 5V up to 12V, it seems), and half the size of a Raspberry Pi (actually the R/Pi is much bigger than “credit card size”) – this sounds like a serious competitor to the platforms I mentioned before (mainly PICAXE). Man, that micro-B USB connector, which serves as both a programming port and a 5V power supply – that is totally awesome. This is how things should work, indeed!

I was itching to put my Arduino to work, so I started to think about a practical project for it.

I’ve been planning some automated light switches around my house, and I wanted them to be tunable for different places in the house, so I didn’t really want to go with standard outdoor motion sensors – those are neither cheap nor tunable.
At the same time, Roger drew my attention to the possibility of programming smaller ATtiny chips with the same Arduino code (there is a YouTube video about it, but you can get the idea faster from here). In a nutshell, you use an Arduino board as a programmer for ATtiny chips. (You can buy a separate programmer, too – but that sounds less cool, I guess.) Aha, this is much better than using my all nice and flexible and powerful Arduino Leonardo for a 2- to 4-pin project.

The ATtiny85 can be compared to the PICAXE-08M2, I guess. I was told the ATtiny85 can be found for under $1, while the 08M2 costs almost $3 ($2.66 from Sparkfun if you buy 10 or more… and I’m just about to buy 10 of either for my light automation project). Features like I2C and a serial port are nice to have (for debugging), but not really important for my project. What is important, though, is the ability to re-program on-board… for some reason I just like the idea.

Well, adding a programming connector 1) adds some hassle to the design process (I hate connectors), and 2) takes 4 pins out of the game (or I will have to add some switches for Programming vs. Operation mode). PICAXE, on the other hand, has pre-loaded firmware which uses serial communication for re-programming. I could go with an 8-pin PICAXE, but would have to take a 14-pin ATtiny, i.e. the ATtiny44 or ATtiny84. Also, I would have to design all the connectors myself, because there’s no standard (or at least none I’m aware of) for an ATtiny programming connector. Furthermore, I’d have to either buy a programmer or rig up some connector to use one of my Leonardo boards… a lot of effort spent on boring things. Hmm… PICAXE definitely wins the on-board programming contest.

Another thing: Arduino was made a standard board with connectors and such because it is, well, much easier to work with something well standardized. PICAXE connectors are standardized as well, and the programming circuit consists of 2 resistors. I’m just too lazy now to design something similar for the ATtiny (and frankly, I really like the idea of an on-board programming interface; as a software developer I can hardly imagine designing something I cannot re-program later on).

Although the difference between a $2.66 PICAXE and a $1 ATtiny (representing Arduino) is not really significant anymore (it’s not like one of those old $20 Arduino boards), price is not really important for a hobby project… convenience is. And for real-world production, like I said before, I’d rather use that dirt-cheap bare PIC. Bummer! I’m ending up with my old favorite PICAXE… but I’m planning on using Arduino for something more complicated than that… maybe some robotics project, or more complicated automation like a watering system in my garden, or maybe some project with a display and some buttons and such. We’ll see.

Filed under Electronics

To Arduino, Leonardo! $9 Arduino board

Clone of Arduino Leonardo from Borderless Electronics


$9 Arduino

In my previous post I argued about Arduino prices, basically making it my least preferred choice of micro-controller platform.

Guess what: a friend and colleague of mine sent me this link today (thanks, Roger!), and it changed the whole picture! Arduino for $9?! Well, I realize it’s the manufacturer’s price, and the retail seller doesn’t make any profit on it… but who cares why and how, it’s a $9 board! OK, about $10–$11 or so if you order 2 or 3 or more, including shipping, etc… Still, man, it’s a pretty cool price for a fully functional board!

Now, this would definitely at least level the ground between Arduino and PICAXE. The price closes the gap with PICAXE, although you can still get a PICAXE with more legs for that price — from a retail seller.

Of course there is a possibility that PICAXE will make their PIC-based stuff even cheaper… and this is good, that’s what I call competition and free market!

The Specs

So, if this is a clone of Arduino Leonardo, let’s take a closer look at the latter:

  • Operating Voltage: 5V
  • Input Voltage (recommended): 7–12V
  • Input Voltage (limits): 6–20V

Well, the de facto operating voltage is 7 to 12… and the micro-B is for the USB connection, not for power supply. Not a big deal, though – my understanding is that for the price you get not just the bare board, but also some additional perks, including a power connector for a 9V battery.

Now, what’s most important for small hobby electronic projects? Number of controlled pins, of course! This little guy has 20 pins, which makes it comparable with PICAXE-20M:

  • Digital I/O Pins: 20
  • PWM Channels: 7
  • Analog Input Channels: 12

Although PICAXE is still a bit ahead, unless you are planning mass production of really cheap devices (where bare PIC MCs still seem unbeatable), the difference in price between a PICAXE project and this Arduino clone becomes really meaningless… I might be switching to Arduino soon.

For now I’ve just ordered two of these little guys… and we’ll see!

Some High-Level Alternative

Another cool thing, although on a totally different scale, is .NET Gadgeteer. It’s a high-level gadget tool kit with a display (!) and a joystick (or two?) and stuff like that. For rapid prototyping, or for kids who don’t have the patience to do it slowly from the low level. :)

This page describes it pretty well.



Filed under Electronics

To Arduino or not to Arduino


This is going to be a very non-mainstream statement.


Don’t get me wrong: I think Arduino is super-cool. Its main programming language is C, and I like that. The whole idea of modules and shields is also something I can only praise. It’s open-source, which is the hit of the season.

Arduino Uno

But let’s face the sad reality. For reasons that escape me (it has something to do with the economy, but I’m a geek, not an economy guru – otherwise I would probably be a much richer person), physical things nowadays cost more than electronics and engineering work and stuff. If you don’t believe me – check out Arduino. One Arduino board will cost you $15 at least (typically around $20–$25), and it does not include an Ethernet connection. Actually, the Arduino Ethernet shield might cost twice as much as the Arduino board itself, making the price totally unreasonable. Apparently, the very fact that something is produced as a single board, a physical thing, adds much more to the price than the presence of a sophisticated chip on the board.

Actually, this points to an interesting thing. Maybe it’s not the big fat corporations that “take all the profits”… maybe manufacturing costs are just that high today (relative to development / engineering costs), so the main cost of those boards is in the physical components. People can agree to work for less if they don’t have other options; apparently, physical materials are less flexible than we the people… but never mind.

Another thing that is not cool about Arduino is the power requirements. So, is it 5V or 7V or 12V? :)


Raspberry Pi

Let’s compare it to the Raspberry Pi, version B, which costs $35 a piece or so, and includes an Ethernet connection and normal scripting languages (Python, which is the standard, and Perl, which is the “glue and duct-tape of system administration”). A comparable (or even better? I never bothered to count) number of pins, and a rather standard power supply (5V via micro-B USB, which is the new standard for cell phones). It communicates quite nicely with the outside world via good old TTL logic, and via I2C, too. Frankly, this covers the needs of 99% of the people.

No surprise that the R/Pi, which emerged from nowhere last year, is as popular today as Arduino (at least in Google Trends – and today is 7/7/13). Seriously, I don’t see any visible advantages for Arduino, except possibly the size of the smallest ones (like the Arduino Nano), which also have limited abilities… and anyway, as technology moves on, the R/Pi will become smaller (or will fit more things on the same board).

But wait. What if you don’t actually need that much capability and flexibility? What if you just want to add some simple automation, say, turning the lights on in your basement or laundry room when someone walks in? (First, you might use a PIR sensor with a delay, without any CPU or MC, but let’s say you want something more advanced.) Or you build a robotic toy for your kid, or a hit counter for your basketball hoop, or something simple of that scale.



In this case, let me introduce the much less known PICAXE: a good old Microchip PIC (different versions, different numbers of pins, same platform, same programming environment). It’s dirt-cheap (not as cheap as a bare PIC, but still cheap: $4–$5 for the 14- or 18-pin version, and as low as $2–$3 for the 8-pin version).

Here are the cons, though:

  • You have to solder the board yourself. It’s very simple (two resistors and a 2.5 mm audio jack, which is used as a communication port to connect it to your PC; then you add the pins you need). It will add at least $5 to your project, and if you have never soldered electronics before, you’ll need to purchase a soldering iron and all kinds of supplies. (I do it, I enjoy it, and I already have the supplies, so building another PICAXE-based project is rather fun for me, but your mileage may vary.)
  • You can purchase a “starter pack”, which is going to make it more expensive.
  • The programming is in Basic, which is not cool. Frankly, I don’t see much difference between Basic and C while you are limited to a single-threaded model with pins and no floating point and such… it’s all the same. It’s like VB.NET vs. C#/.NET – for 99% of the cases the differences are minor, and both languages cover pretty well what the platform has to offer.
  • The programming cable costs about $25–$30, and this is not good if you are just planning to build one simple project.

You need some basic knowledge of electronics, so as not to forget to connect a pull-up resistor… but wait, you’ll need that when connecting sensors to your Arduino, too! :)

I never needed to use the 40-pin version, but it sounds like it could save you a couple of bucks (while adding some programming time) if you need to control devices with a lot of inputs (like displays with no serial adapters). In this case PICAXE might come in even more handy than Arduino (which definitely has fewer pins) or even than the R/Pi (not sure about that one, though).

The 40-pin version costs around or over $10, but then I’d say, if your project is that big, use the R/Pi! :)

Oh, and it feeds itself from 5V. Connect it to any power adapter with a USB cable… it doesn’t have to be micro USB; just grab a regular big USB connector, which you can find on eBay for a couple of bucks per 10 pieces or so.


Arduino Nano

The Niche

Somehow Arduino falls between PICAXE and the R/Pi, and that niche is not that big. Like I said before, though, the Arduino Nano has its own niche of <Raj voice on>teeny-tiny<Raj voice off> projects, which is a niche indeed… for now, because the size of everything in the world of electronics is shrinking.

Overall, I’d say even PICAXE is a niche product now, because frankly, the price of a fully soldered PICAXE board with a lot of pins and stuff gets rather close to $20, and it’s not a fully functional computer… which the R/Pi really is. PICAXE still works where you want to save a couple of bucks, or just want simplicity… but it’s marginal. As the cost difference (seriously, $10!) becomes less significant for enthusiasts, it will make even less sense to use PICAXE… or micro-controllers in general.

Arduino, like I said before, is not really competitive in terms of pricing. Presumably, it might be used by those who want to build a project prototype and then put it into real production, but c’mon – those who really want to minimize their costs take bare PICs from Microchip. :) You can get PIC samples from Microchip for free, how ’bout that!

Overall, methinks, the world of micro-controllers is shrinking. I guess they’ll survive in specialized but standard tasks, like driving 14-segment displays or stepper motors and such… But I don’t see any reason why your R/Pi-based microwave oven shouldn’t be able to send you an email saying “food is ready, boss!” — which you programmed yourself, in Python.


Filed under Electronics

Old Notebook, New Ubuntu (12.04 LTS on Dell Inspiron 6400)


The hard drive in my 6-year-old laptop finally died, and I decided not to restore the old Windows XP installation, but to go ahead and try Ubuntu, which I had only used before either on servers or on specialized computers (like media players / internet stations / etc.), where not much desktop user experience was necessary. Now we are talking about my personal laptop, where I spend most of my time as a user/blogger/etc., not as a programmer (I have another computer for work).

The laptop is a Dell Inspiron 6400, and it has been in heavy use all these years — this is the second time I’ve replaced the hard drive. A good old work horse.

I picked Ubuntu 12.04 LTS; 11.04 would probably have made a better choice for hardware that old, but Ubuntu 13 was on the horizon, so I thought, the newer the better… never mind. There was one thing that doesn’t let me claim Ubuntu is an “out of the box” solution for that Inspiron, and that’s the wireless card, which didn’t work right away. Everything else looked just fine.

The caveat: the Broadcom 802.11 STA driver suggested by Ubuntu just blocked the network card, and had to be removed. If you ever try to install Ubuntu 12.04 LTS on an Inspiron 6400, don’t even try to go with this driver. Do the following instead:

sudo apt-get remove --purge bcmwl-kernel-source
sudo apt-get install firmware-b43-installer

I found this on some Ubuntu forum… and didn’t even bother to dig into the details. Apparently the Broadcom STA driver was in conflict with another one built into the kernel or something, so it had to be removed and I picked up the b43 firmware instead. It worked right away, and I just left it alone.

As I said before, I might have been better off with 11.04 because it could be less demanding on the hardware… but maybe not. I never looked into the “official” h/w requirements, and they always state just the bare minimum anyway. That notebook of mine is not the newest piece of hardware, so it becomes sluggish from time to time… mostly when heavy scripting environments get involved — and modern UN*X systems (Ubuntu and Mac included) are all about Python, Ruby, and of course Java. (Interestingly, wasn’t wasting computer resources once one of the main accusations UNIX people leveled at Windows?)

And another thing. I installed Eclipse with PHP support, configured it for XDebug, and it all worked like a charm. Well, almost… :) I had to link my project source to /opt/www, but that’s all.

Evolution is finally here. Linux for human beings, minimum configuration required, and it can even print to a Windows network shared printer. (It appears Windows 7 has severe problems with sharing printers, but that’s a totally different story…)

Ruby again: Sinatra and MVC discussion


Although Ruby is mostly famous for the Ruby-on-Rails framework, it’s not the only one in the Ruby world.

The tutorial I mentioned before suggests Sinatra as an introduction to Web development in Ruby. I gladly walked through the open door, and here is what I discovered…

  1. Easy to install (just another Ruby gem, duh)
  2. Not full MVC, but it separates views from the logic (see my opinion below)
  3. Can be easily (!) deployed with CGI (and I love the “easy” part above all) – no tricks, no configuration to mess up, no Apache modules

Now, who said MVC is a holy cow? After all, the separation between Controllers and Models is somewhat vague (which leads to designs like MVVM), while the separation between Views and the logic is essential. When I build my CodeIgniter sites (and CI is just a “by the book” MVC), my Models are merely convenience wrappers around CI DB manipulation calls. On the other hand, the MVVM and MVP patterns were born because some web apps are so DB-driven that the Controller functionality is reduced to a View Model or Presenter or whatever they call it (it might be just another name for the same thing, but it also points out that in many cases a generic Controller turns into something more specific and closer to either the Model or the View). The bottom line is, the separation of Views is mandatory, and Sinatra provides it, and then you can play with your Models/Controllers/whatevers as it pleases you.

To begin with, you just start your Web app with ruby <your-app-name>.rb, as you would start a normal Ruby script. This will be your development web server. On startup, Sinatra will produce a message like this:

[2012-07-30 11:44:36] INFO  WEBrick::HTTPServer#start: pid=24851 port=4567

Now you can start a browser on the local box and go to localhost:4567 — it will display your Web app.

To stop your development server, hit Ctrl-C, and it will say:

== Sinatra has ended his set (crowd applauds)


You can start creating your Sinatra app as described here, but I personally would suggest just putting together the script and the Views directory. (Simplicity, remember?)

The “Hello World” start

Let’s take a look… Eventually it’s going to be the “Borg Attack” game on the Web, hence all the names.

Here is the code:

require "sinatra"
require "erb"

# The "Borg Attack" app, Sinatra classic (top-level) style
get '/' do
  greeting = "Hello, World!"
  erb :index, :locals => {:greeting => greeting}
end

require “sinatra” — brings in the Sinatra framework and takes care of all the web server / HTTP stuff.

require “erb” — is for your page handling, templates, etc. Higher-level stuff.

Now, something that I hate: the name erb is cryptic. But after “gems”, “bundles”, and “sinatra” itself, it’s not a big deal to complain about, indeed. :)

Here is something that I love though: that line

erb :index, :locals => {:greeting => greeting}

is easy, understandable, and transparent by all means. We call that erb thing (whatever it is, it separates views from the logic), and it loads the index template with a dictionary of variables which are processed in your View (the latter is a standard technique in CI, Django, ASP.NET — probably in every Web development framework).

A View is a template. (Hmm, here is another proof that the separation of Views is essential: templates became popular long before the MVC pattern. Ever heard of PHP Smarty?) And Sinatra View templates have their own syntax (the template code comes almost entirely from Zed Shaw’s example):

    <title>Star Trek -- Attack of the Borg</title>
    <% if greeting %>
      <p>I just wanted to say <em style="color: green; font-size: 2em;"><%= greeting %></em>.
    <% else %>
      <em>Hello</em>, world!
    <% end %>

So, again, we have a typical template language inside HTML files. It’s a standard technique, and therefore it’s easy! Zero learning curve! If you are familiar with this stuff in general, you can start developing web sites with this Sinatra thing almost immediately!

Handling Forms – easy!

The next cool thing in Sinatra: forms. They’re probably done more elegantly in Sinatra than anywhere else I’ve seen. Here is how it works:

  • define a “get” handler for the form => it will show your form.
  • define a “post” handler => it handles the submitted form. The params dictionary holds the submitted parameters / filled-in fields.

That’s it! OMG, literally — that’s it! You can do forms in Sinatra now!


  get '/hello' do
    erb :hello
  end

  post '/hello' do
    greet = params[:greet] || "Hello"
    name = params[:name] || "Nobody"
    erb :index, :locals => {:greeting => "#{greet}, #{name}"}
  end

So easy, so good!

You can concentrate now on the task at hand, rather than on “learning the framework” overhead.


Layouts – easy!

Again, it’s really easy! Most simple websites use just one template for all pages. You just put another file into the Views directory, named layout.erb, and it will be picked up by default.

layout.erb code:

    <title>Star Trek -- Attack of the Borg</title>
  <h1>Star Trek -- Attack of the Borg</h1>
  <%= yield %>

Then you can strip the header and footer parts out of your pages. E.g. your index page:

<p><a href="/hello">Say Hello</a></p>

<% if greeting %>
  <p>I just wanted to say <em style="color: green; font-size: 2em;"><%= greeting %></em>.
<% else %>
  <em>Hello</em>, world!
<% end %>

I have to admit, I’m totally ecstatic, indeed! The code appears to be even more concise than what I have in CI. You might seriously consider Sinatra for small, non-static website development. In other words, it seems to be a good competitor to CodeIgniter.

“We are solving really complex problems here!”


O, complexity, the god of so many IT and R&D departments! Thou providest people job security, self-esteem, and sometimes very nice bonuses (that’s in the case of financial institutions).

Recently I heard that a company I used to work for a couple of years ago conducted a massive lay-off, closing whole departments that were working on “very complicated software”, with a whole SDLC “process” and automated testing and a lot of other things… It wasn’t a public company, all right; it had a private owner, and at some point the owner just wondered what all those people were doing, besides spending his money in a complicated economy like the one we have today.

I’ve been through projects with a lot of legacy code, and I’ve started projects from scratch; big and small projects, in companies big and small, on different platforms, OSes, frameworks, etc. The same pattern persists everywhere: people just love to over-complicate technical problems.

People might have different motivations, of course: some just try to make their boss happy, others crave “universal” solutions, but the result is always the same: in a couple of years, the normal turn-over of human resources creates a situation where nobody really understands what’s going on with a huge amount of code, databases, scripts, and so on.

Complexity is bad.

It’s very bad. It’s something that removes the substance and replaces it with a “process” of “figuring out” the substance (if any). Financial instruments called “derivatives”, all those CDOs and CMOs, were way more complex than stocks or commodities, and we all know what happened to the banks that had built their businesses around those instruments. But the reports those banks were presenting along the way were never bad. They were, well… complex.

There is another term for complexity in the software world: we call some designs “over-engineered”. In many cases the over-engineering is caused by an attempt to make the design more flexible and universal, and thus it becomes very configurable and flexible and customizable… but 90% of that customization is never used.

In other cases, excessive complexity comes into existence when the original design wasn’t well thought through. This is actually the more “classic” scenario, described in all the smart software engineering books, and as such, it’s not seen too often in serious big projects.

And complexity is created by people. The same people then fail to figure out what that complex solution is doing. Managing complexity has very little to do with computer science or even computer engineering. It’s 90% a psychological problem.

Typically, engineers have good intentions. They want to get their job done really well. They try to foresee every scenario, anything that might go bad. That’s why they try to keep things flexible. And to keep things flexible, they need dynamic object management, and some universal platform running their business logic, and some tools which are the “industry standard” for that platform, and so on, and so forth… Sooner or later it leads to spaghetti code or spaghetti configuration, or (most likely) both.

My bottom line is: first, complexity is bad, and second, complexity is created by people.

Some Rules of Thumb

It’s probably not enough to proclaim the holy principles of DRY (Don’t Repeat Yourself), KISS (Keep It Simple, Stupid!) and YAGNI (You Ain’t Gonna Need It!). We are serious people, all right: engineers, programmers, educated and all that. So let’s take a look at some concrete recommendations:

1. Solve Only the Problem At Hand. Don’t try to solve the problems of the entire world. Don’t try to look too far into the future (unless that is part of your requirements, of course). Most businesses today are short-sighted, and so are their business needs. If you are late because you were planning for “tomorrow”, well, there might not be any “tomorrow” for your project.

2. Use Simple Programming Techniques. It’s quite simple, all right: there should be a good reason to use complex over simple, custom over standard, and so on. If you want to use multi-threading, double-check whether you really need it. If you want to create an additional interface or implementation, derive another class, or add another level of encapsulation, make sure it’s really necessary.

  • avoid multithreading, unless the task at hand requires it
  • avoid adding extra levels of indirection
  • avoid adding classes
  • avoid adding functionality, unless real business needs demand it
  • avoid creating another table in your database
  • avoid creating another field in your table
  • avoid… just anything! be lazy! minimize the amount of your code!

3. Keep Your Design Simple, Your Components As Detached As Possible. Meaning: while solving the task at hand, don’t forget that it might change if your clients change their minds. You won’t have a problem “refactoring” your code at any moment if it can be easily decomposed into smaller components.

4. Neither DRY, KISS, nor YAGNI is an absolute principle. The Great Dao of Laziness is. If you can make something small and concise, serving its exact purpose, do it. Most complicated designs with “future needs” in mind end up so complicated that they don’t work for those future needs either, and then the whole thing requires some huge refactoring. Remember: impossible to see, the future is! (C) Yoda

And last, but not least:

5. Think twice before using a 3rd-party component. Use it only if its learning curve is minimal.

And of course, if you have on your agenda something other than a simple, elegant, easily maintainable design, feel free to print a hard copy of this article and toss it in the garbage.

On C++11


When you have a lot of work to do, you are typically trying to avoid unexpected factors in your code. That’s why I’m a bit behind on C++11.

Well, I’ve used a couple of new things, like the new meaning of the auto keyword and unordered_map (instead of the old hash_map), but that’s about it. So I’m finally taking a look at the new features of C++11 (formerly known as C++0x).


There are so many good things to say about C++11. Let’s start with what I believe is the symbol of C++0x/C++11: lambda functions/expressions.

vector<int> v={10,22,11,3,14,9,4};
sort(v.begin(), v.end(), [] (int x, int y) { return (x < y); } );
string delim = "";
for_each(v.begin(), v.end(), [&delim] (int x) { cout << delim << x; delim=", "; });
cout << endl;

BTW, I’m totally ecstatic about the new std::to_string. (It’s been a pain spinning up a stringstream every time just to turn a number into a string.) Also, I find lambda functions a cute mechanism which might improve productivity for those who know what they are doing… and create more potential for spaghetti code by programmers who, well, know a lot of language features. :)

Also, I’m finally going to make more use of the <algorithm> header. :) To be honest, I rarely used all those convenient algorithms, just because it was a pain in the neck defining a functor every time.


Next nice feature: decltype, a logical continuation of the new meaning of the auto keyword. You just grab the type you need, even without an assignment. At first glance it’s hard to see where you’d use it, until you start writing templates whose return types depend on their arguments…

default and deleted

In my humble opinion, these two new keywords are mostly syntactic sugar: = default replaces empty function bodies, and = delete replaces the old trick of prohibiting members by declaring them private and leaving them unimplemented. On the other hand, why not.


NULL? No, that’s not even a keyword, it’s a macro. Finally we’ve got nullptr, and also constructor delegation (something I craved for a long time!)

Constructor delegation is, in simple words, the ability to call one constructor of a class from another.

nullptr helps the compiler distinguish between foo(int x) and foo(char* x): the literal 0 picks the int overload, while nullptr picks the pointer one. And yes, that makes a big difference.


These are some of the simple and nice features found in C++11. Next time I’ll try to take a look at rvalue references and move semantics. Since C++’s primary niche today is high-performance system design, move semantics (as opposed to copy semantics) sounds very promising.

Ruby Structures and Lieutenant Commander Data


Unlike C, structures in Ruby can be generated on the fly, and unlike Perl, they will still be different from dictionaries.

Indeed, we are dealing with a higher level language here. There is no reason to worry about bytes and bits and CPU cycles. It’s all about usability, simplicity, maintainability, and so on.

Ruby Approach to Structures

So the whole idea of structures being, in fact, dictionaries with predefined keys sounds just right. After all, we access structure fields (as well as class data members) by name, so why should it be different from just a dictionary? On the other hand, when you deal with structured data, you want some sort of predictability; in other words, you want to know what kind of fields your data object might contain.

Here is the Ruby approach:

# Struct is some sort of class generator. It generates classes, and classes
# generate instances/objects. Very OO!
PersonData = Struct.new(:firstname, :lastname, :address, :zip)

# Now PersonData can instantiate an object / an instance of the PersonData class:
joe = PersonData.new("Joe", "Schmo", "12345 Bedford Avenue, Brooklyn, New York, NY", 11211)

Doin’ It On-the-Fly

We call Ruby and Python “scripting languages” just because they are interpreted. But thanks to JIT and similar techniques, the border between strictly compiled and strictly interpreted languages is fading. C and C++ are strictly compiled languages, but how about Java and C#? They use JIT technology and apply some optimizations at run time. PHP, Python, Perl? They pre-compile the code behind the scenes.

See, the reality is that computers are still getting faster, and they are already fast enough that the difference between a “script” and a native binary doesn’t really matter for many applications. And interpreted languages have one distinctive advantage compiled languages don’t:


A program written in an interpreted language
can re-program itself.


That’s why structures in Ruby can be defined at run time. Java’s and C#’s reflection (or whatever it’s called) is just a baby step toward self-programming programs. In C#, or say Python, a program can write a source file and compile/load it. Ruby takes some further steps in this direction: a program can define its own data structures. It can adapt to its environment.
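A minimal sketch of what “defining a structure at run time” looks like, assuming a hypothetical scenario where the field names arrive with the data (say, from a CSV header):

```ruby
# The field names are not known until the program runs; here we pretend
# they were just read from a CSV header line.
header = ["firstname", "lastname", "zip"]

# Mint a brand-new Struct class on the fly from those names:
Person = Struct.new(*header.map(&:to_sym))

joe = Person.new("Joe", "Schmo", 11211)
puts joe.lastname    # prints: Schmo
```

The class simply didn’t exist before the data showed up; that is the adaptivity the paragraph above is talking about.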

Of course, anything done in an interpreted language can be done in a compiled language too… it’s just much harder to do. On the other hand, an interpreted language with built-in abilities to do most of its work at run time (load modules, define data structures, etc.) is leading the way to programs which write other programs…

In this part we should start speculating whether robots will turn on humans some day… but of course they will. They are built to.

I guess the question is whether a machine built and programmed as “good” will turn on humans some day… I bet it won’t. There is a limit to the self-programming flexibility of any program. Hell, there are limits to what a given living person is capable of doing, and it depends a great deal on programming, i.e. education and social environment.


As I said before, when we can manage complex mechanisms in a simpler way, there is more potential for effectively handling more complex problems. In other words, there are more chances of ever implementing Lieutenant Commander Data in Ruby than in C.

Let’s C… oops, let’s see:

Data can reprogram his own code. You can do that in Ruby much more easily than in C. Running a Ruby program means you have the Ruby interpreter on the same machine, which means you can load and evaluate new code on the fly. Running a compiled C program implies nothing of the sort.
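A toy illustration of that point (the `Robot` class and `greet` method are invented for the example): an object grows a new method while the program is already running.

```ruby
class Robot; end

r = Robot.new

# Later, at run time, we teach Robot a new trick. This could just as
# well be driven by data the program received while running.
Robot.class_eval do
  define_method(:greet) { |name| "Hello, #{name}" }
end

puts r.respond_to?(:greet)   # true -- even pre-existing instances learn it
puts r.greet("Data")         # Hello, Data
```

No recompilation, no restart; the running program rewrote part of itself.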

Data can learn new categories and even new abstractions. In Ruby, you can define new structures at run time, and it’s native to the language. In C you just can’t do that. In C++ (and also in C#, Perl, etc.) you can imitate the idea of dynamic structures with hashes and dictionaries, but again, that’s a lower level of abstraction; Ruby makes it easier.

Data can run self-diagnostics. Ruby modules are easy to test and fix one by one; go and try that with a system written in C or C++.

Well, you’re getting the idea. ;)

Back to the real world… Lessons 48 and 49 of Zed Shaw’s book show how to build a simple language parser in Ruby. I hope there is some good NLP library out there; just google “Ruby NLP”… Also, depending on how complicated the problem is, you might try integrating libraries written in other languages.

Filed under Scripting