More parallel computing chips

Somewhat over a year ago I jotted down some notes on parallel microcontrollers. I hadn’t heard or done much since, but a few things have happened. I ended that note with a plea for more options, and today it was finally – albeit indirectly – answered. Slashdot picked up some PR from Intel regarding highly multi-core processors, and a comment about other brands mentioned two I had not yet heard of.

GreenArrays has started offering some of their larger chips for sale. They’re another product I suspect will be relegated to niche status and forgotten, which is a real pity, as they have some very good ideas. The problems aren’t very complex, and not necessarily crippling. First, the whole design is based on the creator’s favourite language, Forth. It is a 1970s language, and hasn’t changed much since; as such, the grand interactive development system feels like an 80s microcomputer. It simply doesn’t scale well, and that’s a problem when scaling is what it’s all about – they offer 144-core chips! The other drawback is the lack of communications routing: all those cores must shuffle data between themselves programmatically (and yes, the entire layout has to be done manually for now). Finally, don’t expect a hobbyist foothold when only large BGA packages are available, nor much of an industrial one while they’re the only source and porting costs would be immense. Where the design shines is power efficiency, and it’s fairly impressive in speed and code density, but that just doesn’t seem enough.

Picochip’s multi-core DSPs fall into the hybrid chip category. They feature a reconfigurable section, but instead of a bit-level FPGA fabric it holds a bunch of DSPs, while ARM cores handle the general-purpose computing.

The Icera chips, on the other hand, I could find no actual details about. They remind me of Zii – there’s some DSP going on, but they won’t say what.

The Zii Plaszma is actually being sold, with plenty of marketspeak claiming it’s revolutionary, but they seem more focused on making up analogies and buzzwords than on admitting anything about the architecture or specifications. In fact, they’re so busy making these up that they’re outright lying about what other products do. Their marketing has convinced me not to trust them.

Around the corner – displays

I recently posted that I’ve ordered a so-called e-reader. What sets such a device apart is all in the display: it needs to be readable in sunlight, and it needs to consume little power. That’s why the one I picked has an E-ink display. But as is the tendency of such things, there’s always something better just around the corner – something not yet available on the market, or still just a bit too expensive. This post is a collection of the (hopefully) emerging display technologies I know of, with a focus on e-reader style displays.

E-ink
I know, not really emerging when I’ve already ordered a third-generation device, is it? Still, these deserve mention, at least as a point of reference. E-ink displays are purely reflective and stable, meaning any energy to display the image comes from light bouncing off it. Changing the image, on the other hand, is a lengthy process frequently involving multiple passes (at least for grayscale). New driver chips have reduced this, but it still suffers from flashes to black and white. E-ink Triton was recently announced, and will add color capability.

SiPix
Also on the market, and near identical to E-ink displays, SiPix claim to be the biggest maker of electronic paper modules. This is probably because they make custom models, capable of showing only specific segments; they do make active pixel matrix models as well.

Nemoptic Binem

This was the type I was most convinced to wait for. Of course, only days later the company went bankrupt, and I made my compromise. So what was so good about Binem? It came down to a few properties: the same manufacturing technology as common TFTs, a translucent bistable layer that could be combined with a backlight (demonstrated with OLED), and updates at video speeds without any flashing whatsoever. We never got to see what the cost might be, and I boggle at a world where this company found investors hard to convince.

Pixel Qi

An active contender is the Pixel Qi display. I’ve been wishing for one of these for a long time, since they’re both full-speed displays and feature a reflective mode. They’re proven technology, being a spin-off from the OLPC project, yet they’re barely beginning to inch into the market. Currently they’ve launched a DIY kit for a few netbook computers, and we’ve heard of one tablet device that should have a Pixel Qi display. A peculiarity is that the display goes nigh monochrome when passive, which makes the subpixels more interesting for spatial separation – though this will only triple the resolution along one axis. I’d have considered a non-square color pixel ratio to exploit the higher resolution better – video formats drop the color resolution anyway.

Mirasol
Probably the most promising development among passive displays, Mirasol uses interference – the wave nature of light itself – to accomplish color. Not only that, it’s a bistable technology with very fast updates, although not yet at common video rates. This might have gotten me to wait, as there are now rumors of an e-reader with their technology next year. Unanswered questions include what sort of resolution they’ve accomplished, as the elements are literally bilevel and will require multiple subpixels per pixel to get gradual levels. On the plus side, this means they’ll be working on getting the element resolution much higher than the intended pixel density, and I’m a sucker for true resolution. Let’s hope the controllers will exploit it.

Unipixel
This display is neither stable nor passive. Unipixel displays are pure active matrix, like today’s common TFTs, but operate with very rapid shutters. The idea is to work like some color scanners, flashing whichever color base you need (typically red, green and blue) through the shutters. This requires what is basically a distributed PWM controller, fast enough to give the desired color precision and wide enough to handle every pixel at once. It’s been done for OLEDs, but there the emitting layer has the convenience of being the last visible one. Basically, it’s a flat-panel version of DLP. The advantages of this method are that each pixel sits in one spot (no shifted subpixels), the shutter can pass more of the light than common panels, and since it starts out fast there’ll be no ghosting. It’s interesting, but hardly the stuff to compete with stable displays on energy efficiency.

OLED
Another technology neither stable nor passive, but included for comparison. OLEDs use directly emitting subpixels, which makes them very energy efficient for an active display; there simply is no backlight to be filtered. But they do suffer from the fact that all the picture detail has to be emitted from electrical power, meaning it’s impractical for them to compete with sunlight.

TFT
A few of these are used in e-readers. TFTs are divided into a few different technologies, and the Pixel Qi display actually belongs to this group – but most modern ones are limited to backlit operation; legibility with the backlight turned off is normally negligible. Apple’s iPhone and iPad displays are of the IPS type, which has better colors and viewing angles than most – I got an IPS panel for my desktop monitor. The cheap panels tend to be of the TN type, which has horrid viewing angles – colors go nuts as soon as the angle is a bit off. This is what most laptops and TVs use, and you’ve surely seen the result when you tilt the screen just a little wrong. The same effect occurs when you get close – and since it differs between horizontal and vertical angles, you can’t expect decent performance if you turn to portrait mode. To make matters worse, a lot of displays are built glossy. Great if you need a mirror, but that’s not the purpose of my displays.

CPT
A direct competitor to Pixel Qi, CPT have a sunlight-readable display that doesn’t lose color quite as much when lit strongly from the front. I don’t know much more, as all I’ve seen are a few quick video clips of a fair booth, from Netbook News.

I’m sure I’ve missed a few interesting ones. Please let me know which!

E-book reader

My gadget mania has struck again. As is my habit, I again selected basically the most expensive device on the market of some particular type. It started, pretty much, with finding a version 2 Kindle in a friend’s sofa. Neat device, I thought, and fiddled about with it for a while. It was tempting, but there are a few things I don’t like about the Kindle.

  1. Amazon control the device, not I. Sure, they have less control if I never go online with it, but half the point of a Kindle is their free data service.
  2. The resolution is too low. I read many technical documents, where 600×800 is just slightly too small to be practical.
  3. The contrast was just slightly disappointing. Not a showstopper, but annoying.

Still, I just couldn’t get the idea out of my mind – so I started looking around for alternatives. MobileRead hosts a wiki with a rather helpful overview. It turns out there was a whole slew of promising demonstrations. The Kindle DX exists, but has the same control issues and a notably higher cost. Irex has closed. Nemoptic apparently went bankrupt. The Brother SV-70 is apparently only for the Japanese market, and only reads their own proprietary format. The Skiff reader had a clear advantage in resolution, but the whole company was swallowed by News Corporation before release (and has yet to be seen again).

Somewhere along the line, the old idea that a stylus for handwriting would be nice resurfaced. The trigger this time was reading about the Onyx Boox, but as a measure of how far back the concept has interested me, I have an RS-232-connected inductive digitizer from Genius. It’s old, and not very flashy, but it does work – although the last time I used it, I had to patch the driver to work with recent Xorg. I’ve tried a few touchscreen devices since, such as the TuxScreen, Palm III, and Agenda VR3. The most popular technologies (capacitive and resistive) share an important side effect – glare and reduced contrast, the very things e-ink needs to avoid. The Boox 60 doesn’t have a particular problem with this, because inductive digitizers can be placed behind the screen. The downside is that a specific stylus is required, so these aren’t touch screens (apparently that’s what you have to call them nowadays to sell), but they make up for it with higher precision and, again, no extra layer in front of the screen.

There’s one concern I haven’t yet mentioned. I’m a programmer, and like tinkering with all my gadgets to some degree. I absolutely loathe it when a manufacturer takes pains to destroy this option (hello, Sony), but if they elect to be helpful, that matters to me. One brand stuck out in this regard: a Ukrainian developer called Pocketbook. They have released SDKs and sources, and, significantly, shown active efforts to maintain and enhance the firmware for existing models. The developer site, hosted at SourceForge, was filled with Russian discussion, but they are currently expanding – with offices all over the place and a multilingual bookstore. They’re about to launch a few new models, and the top of the line – the Pocketbook Pro 903 – is what I’ve ordered. The deal closer, really, was seeing no less than three active and helpful representatives on the MobileRead forums.

So how well did this model stack up to my feature wishes?

  • Active updates from Pocketbook, flexibly overridable firmware, and an active developer community – they do have a DRM thingy, but they point out themselves that’s only because publishers demanded it. Control: Good enough.
  • Resolution: it turns out the nicer options (1600×1200) are nowhere to be found. The 903 has the top E-ink model, at 825×1200 – the same size as the Kindle DX.
  • Contrast is, rumour has it, slightly worse than on the new Kindles (“Pearl” is apparently the cream of the crop) but better than on earlier generations, such as the one I tried. It will suffice.
  • Inductive digitizer makes navigation easier and scribbles possible, without sacrificing legibility.
  • Connectivity is just overkill – Bluetooth, 3G+GPRS, WLAN. This thing can connect through my cellphone if its own SIM won’t do.
  • Memory is not shabby – 256MiB RAM, 2GB built-in flash, and a micro-SD slot for expansion, upgrades or experiments.

There are other details, such as a frankly impressive PDF reflow feature, but thus far it’s enough to excite me. Certainly there’s silliness about too, such as the “Pro” moniker present on all the new Pocketbook E-ink readers. By the way, the 603 model has the exact same features with a smaller screen, and the 902/602 differ by dropping 3G and the digitizer. All of them should run the same software, including all the add-ons.

Playstation 3 debacles

A long time ago I bought a Playstation 3 console. I specifically hunted down the first model released here because of the alarming rate at which functions were disappearing from new models.
Features that had been removed at the time, but not on my model, included: Linux support, half the USB ports, memory card reader, and Playstation 2 compatibility.

In April, Sony abruptly decided that was not enough. They started killing features – indeed, key selling points – of machines they had already sold, models they had not made for years. Suddenly, I was expected to sacrifice the computer functionality, as well as the contents of the hard drive, without explanation. This was a change they had vehemently promised not to make when concerns were raised about the less capable slim models. My refusal led to the specific sabotage of other functions, including online gaming, new games, and even the ability to use credit I had already paid for to buy expansions for the games I own.

As that last item is their strongest money maker, I know the credit thus stolen from me is easily outweighed by what they won’t let me buy. From a purely economic standpoint, it is an obvious loss for them. Yet the few replies I got (they systematically ignore contact attempts) made insane claims, like this being an improvement for all PS3 owners.

About a week ago, the first “modchip” for the PS3 appeared. I do not think it a coincidence that this occurred mere months after Sony performed this attack on customers – after years of no such modification existing. Sony have tried using legal means to restrict distribution of the accessory, which is doomed to fail, as it was replicated in a matter of days; it is now possible for anyone to make one. Unlike Sony’s attack on features, the dongle is not specific to the early PS3s.

Yet in all of this, I’m still waiting to get back what was lost. The new dongle permits running independent code, but as yet I haven’t heard of anyone getting Linux to run with it. And that is not a feature up for negotiation.

TestDisk & PhotoRec

Allow me to present you with a few scenarios, all of which recently happened.

    A friend intended to boot his Windows partition in order to update a laptop BIOS. By mistake he picked the “recovery” partition – easily done when GRUB’s OS prober can’t tell them apart. Without warning, it erased his GNU/Linux partition, leaving him stranded without a functioning bootloader (it couldn’t even be bothered to install a functioning MBR while overwriting that sector). Luckily, he had a bootable USB memory, but all the data he cared about was in the lost partition.

    Another friend was presented with a freshly erased memory card from a camera, from which photos needed to be recovered.

    I wanted to extract the music from a Playstation Portable game I own.

This is exactly what the two tools TestDisk and PhotoRec help with: the first finds lost file systems, and the second finds lost files. Both are incredibly easy to use, work in many situations (don’t be fooled by the “Photo” in the name), and are free software. They should be in your disaster recovery arsenal. This is why I cared not one iota when my Lexar memory card didn’t come with the promised Image Rescue software.

Frankencamera
My question on camera selection has, for the moment, one answer. Unfortunately I can’t afford it, and it still needs a lot of work. The Frankencamera project now supports one consumer-available hardware platform, the Nokia N900. There’s a lot going against it too, such as really slow switching between live preview and high-resolution photo capture, but those are things I could work on – if I had any hardware it worked with. I might look at using it with UVC, but that won’t be nearly as useful as the raw sensor access it has on the N900.

Literate programming

That’s right, I’m finally taking my first small steps towards literacy. I’ve known of the concept – joining documentation and program code together into a unified document – for quite some time, but haven’t really been using it. Sure, I’ve used plenty of automatically extracted API documentation, but rarely (if ever) written any. And today, I needed something slightly different: a report on a programming project.

As with earlier reports, I fired up LyX, because I’m a sucker for easy interfaces. I’m not really at home in LaTeX, and knew from prior experience that LyX could make entering formulae, tables and such easier. This time, though, I needed some circuits, state diagrams and, above all, source code. Looking at the options, I found LyX now supports both Noweb and Listings. So I set about writing bits, documenting the circuit using CIRC, and inserting code with Noweb “scraps”, as LyX calls them. Pretty soon, this wore me out.

LyX provided me with two options for the source code: scraps, where I had to use Ctrl+Enter to get halfway reasonable spacing, and had no indentation or syntax assistance, or Listings, where code was reformatted for printing but not in the editing view. Besides, my CIRC drawing was just literal code anyhow, so LyX didn’t help very much in the WYSIWYG department. Even looking at the file, it was clear that LyX was just adding overhead – my document would be cleaner in Noweb directly.

Having written just a little code inside LyX, I knew I wanted to get back to a proper programmer’s editor. That meant Emacs or Vim. Emacs did open Noweb documents happily, but the syntax highlighting turned out to be a bit bipolar: it switched between TeX and C sub-modes depending on the cursor (point?), reinterpreting the whole document each time – which destroyed the source context. I did find a simple workaround: putting /* and */ in TeX comments lets the C mode know the documentation text isn’t code. Not really a big deal, but I’m not used to Emacs, and this swapping (reminiscent of per-window palette switching in X) was annoying either way. Vim is usually my editor of choice, but it didn’t recognize Noweb at all. I found a few scripts for it, and the highest rated one actually worked. It’s not perfect – it recognizes only a few hardcoded languages within Noweb – but it’s easy enough to modify if needed, and it does the job.

Noweb-style programming is a considerable change for me. My code is now migrating from lots of different files into one larger document, within which I’m writing the structure of the code in an easier, modular fashion. It’s not perfect, but I’m learning. The current question is why double dashes (as in C’s decrement operator) are converted to single dashes in print – presumably a TeX ligature turning them into an en-dash, and WordPress’s “smart” punctuation does the same thing here on the blog. Still, a few steps forward.

54321, a forgotten game pack

The other day there was some discussion of a four-dimensional game in an IRC channel I frequent. This immediately brought to mind 54321, a collection of 5 games in 4, 3 or 2 dimensions for 1 player – the first four-dimensional spatial game I played. So I looked it up again, and found no hint of its existence on the author’s site (apparently now Mac-dedicated). I’d expect at least a mention of why it was taken down, but the page is simply gone. The source is still available in various places (including my own copy), though, and still works today. It requires SDL and SDL_image.

Fossil: project management on the quick

Sooner or later, development projects need some revision tracking – usually right about when you need an experimental branch for a new feature, or want to share the project, which implies releases. You’ll also need to document the work and, if you’re maintaining it at all, probably track issues. Even better if all of this can be done publicly.
Traditionally, these tasks are done in central repositories with specialized tools – perhaps RCS (or descendants like CVS and Subversion), Bugzilla, and so on. They’ve been more or less difficult to set up and serve, which led to services like SourceForge, GitHub, and Google Code. There are tools that handle the combination, like Trac. Most of these work, and sometimes they’re just the thing – because you know you’ll want to share the project and spend the time to set up that infrastructure.
Other times, you’re just doing a quick hack. And then you give it to someone. And, two months later, you run into an indirect friend who’s using that same hack, with their own changes, and experiencing an issue you solved later on… but the code has grown so much you can’t easily track down the changes needed, let alone figure out which release their version is based on.

We’ve lately seen a move towards distributed revision control, with the likes of Git, Mercurial, Darcs, Bazaar and so on. They can, and do, solve the problem of independent development – but only if people use them. Mostly that gets stuck on either learning how to use them or having the tool available: the first is an issue because each tool is different, the second because they have varying requirements. This is not at all unique to revision control; people hesitate all the time to install software because of complex requirements.

Fossil is a project management tool intended to solve some of these issues. It’s not necessarily best at anything it does, but it does it with a minimum of setup. It has a discoverable web interface, works as one program file, stores data in self-contained files, and offers revision control, a wiki, account management for access, and issue tracking. All set up at a moment’s notice, anywhere. Of course there’s a command line interface too.

I intend to use it for a few minor projects so I get a good sense of how it’s used. At this moment, the most nagging question is whether it does anything like Git’s bisection (also available in Mercurial), which is very convenient when tracking down regressions.

OpenCL – now actually usable!

I’ve been experimenting a little with parallel programming, using a bunch of different interfaces – MPI, PVM, OpenMP, POSIX threads, parallel Haskell, Occam-π, and most recently OpenCL. I’ve also been looking at a few others, including XC and Spin. Of them all, OpenCL is by far the most promising when it comes to number crunching, for one simple reason: GPUs. It also has the advantages of being vendor neutral, C based, and openly published. The main downside would seem to be a lack of implementations, but that’s rapidly changing. It doesn’t by itself cover distribution over hosts (although nothing in the API prevents it), but it can be combined with MPI or PVM, which do. If you only need CPU support, though, it’s likely easier to use OpenMP, as it’s a more direct extension of C – and OpenMP programs reduce without modification to single-threaded ones.
As for implementations, three big ones are out for public use right now – Apple’s (in Mac OS X 10.6), AMD/ATI Stream, and nVidia’s (via CUDA). There’s mention of some others, of which the Gallium one interests me most, as I am a free software enthusiast. The reason I’m writing this post is that I’ve finally been able to use nVidia’s implementation.
When I first looked into OpenCL, it was primarily to avoid the proprietary CUDA. I found nVidia did have OpenCL code in their GPU Computing SDK, but to my dismay, it was specific to an old driver known to be buggy. I picked it up again because the most recent nVidia driver beta – 195.36.15 – contained new OpenCL libraries. With a bit of fiddling, this version actually functions on both of my computers that have a modern enough graphics card. There was just one snag while testing: OpenCL contexts must be created with a CL_CONTEXT_PLATFORM property. No big deal, as I can just extract the platform from whatever device I find.
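For illustration, a minimal sketch of that fix – it assumes an OpenCL SDK and at least one GPU platform are installed, and most error checking is omitted for brevity:

```c
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    /* Grab the first platform and its first GPU device. */
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    /* The snag: the context must be told which platform it belongs
     * to.  If you only have a device, its platform can be read back
     * with clGetDeviceInfo(device, CL_DEVICE_PLATFORM, ...). */
    cl_context_properties props[] = {
        CL_CONTEXT_PLATFORM, (cl_context_properties)platform, 0
    };
    cl_context context = clCreateContext(props, 1, &device,
                                         NULL, NULL, &err);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "clCreateContext failed: %d\n", err);
        return 1;
    }

    puts("context created");
    clReleaseContext(context);
    return 0;
}
```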

Here’s my simple OpenCL Hello World. It’s an excellent example of the sort of task you don’t leave to the GPU, as the dataset is ridiculously small and the code is full of conditionals while very light on actual processing. However, it does work, and has no extra dependencies – for some reason, the latter was one thing I couldn’t find when looking at examples. If you’re going to use OpenCL seriously, I suggest you check for errors and use something that can display them, for instance CLCC.