Processor affinity using Cygwin

I’ve been working on a Python script that takes a long time to run (about 2.5 hours), and since it’s entirely single-threaded I figured I’d bind it to a specific core to reduce cache thrashing, enable clock boosting and such. I wanted a method that worked for arbitrary commands, or I’d have used the affinity package. The cmd start command always creates a new window, and I wanted the output in my existing shell session. My workaround uses PowerShell to set the affinity once the process is running.

winpid () {
    # Find the Windows PIDs of the given Cygwin PIDs (default: current shell), using ps
    local pid=${1:-$$}
    ps -lp "$pid" | sed -ne "s/^. *$pid [ 0-9]\{16\} *\([0-9]\+\).*\$/\1/p"
    while [ $# -gt 1 ] ; do
        shift
        ps -lp "$1" | sed -ne "s/^. *$1 [ 0-9]\{16\} *\([0-9]\+\).*\$/\1/p"
    done
}

setaffinity () {
    # First argument is the core number; the rest are Cygwin PIDs
    local bitmask=$((1<<$1))
    shift
    for wp in `winpid "$@"` ; do
        powershell -Command "[System.Diagnostics.Process]::GetProcessById($wp).ProcessorAffinity=$bitmask;"
    done
}

withaffinity () {
    # Run a command in the background, bind it to the given core, and wait for it
    local affinity=$1
    shift
    "$@" &
    setaffinity $affinity $!
    wait $!
}

With these bash functions, I can run “withaffinity 3 somecommand” and have it moved to core 3 specifically.
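As an aside, the affinity value is turned into a bitmask with one bit per core, so masks for several cores can be OR’ed together if you want to allow more than one. A minimal sketch of the arithmetic:

```shell
# Bitmask for core 3 alone: bit 3 set, i.e. 1<<3 = 8
core=3
mask=$((1<<core))
echo "$mask"    # 8

# Allowing cores 0 and 1 together: OR the per-core bits
both=$(( (1<<0) | (1<<1) ))
echo "$both"    # 3
```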

Adapteva Epiphany parallel chip

I’ve written previously on the subject of parallel processing – mostly with a focus on microcontrollers. I’ve also noted that there’s a hole in the current offerings, with FPGAs being extremely fine-grained and GPUs being specialized for massively parallel computations running essentially the same program. The Zii Labs processors made me curious; the GreenArrays chips lacked the switching layer present in the XMOS and Transputer systems. But now we finally see a real contender.

I had the good fortune to talk to one of the people responsible for the development tools for Adapteva Epiphany, currently in a Kickstarter campaign for a computer named Parallella. This is the real deal – low power, high performance, and properly available documentation and tools. It’s not like Zii, where you can request an OpenCL implementation and never get a reply, nor like GreenArrays, where there’s only one possible programming language. This time there’s floating-point and integer support, a unified memory system (although local memory is obviously the fastest), and somebody has prepared a board to get started! So what are we waiting for? Only enough backers. Currently we’re short, and I for one have already signed up. Update: funding succeeded!

At a technical conference the first question was what the chip is for. In short, new applications; this level of performance in this efficient a package has not been available (to the public) before. I think a graphics card will still be the more efficient option for Bitcoin mining, but imagine a synthesizer musician no longer constrained by a local computer. Or a fully programmable camera capable of doing the trivial stuff – like lens correction and HDR imagery – on the fly. This is just the beginning.

Oh, and incidentally, it has one of the coolest FPGAs I’ve seen on a budget board, at the lowest price I know of (1/3rd of the next one). I may go into more detail on the architecture later on. 🙂

Parametric searches

If you’ve ever browsed the web page of a consumer equipment manufacturer, you’ve probably run into the frustrating experience of trying to figure out what makes one model different from another. I tend to go straight for the product specification pages, but sometimes it’s not enough – they may be missing, incomplete (hello, Fujifilm), incorrect or even intentionally misleading (hello, Samsung), or just plain unreadable. And even when the pertinent data is there, we often find that one brand won’t use the same units as another. This is where comparative reviews really shine – but inevitably, group tests don’t cover the precise items you’re considering. One thing that can sometimes help is a parametric search.

Around the corner – displays

I recently posted that I’ve ordered a so-called E-reader. Its distinguishing features are all in the display: it needs to be readable in sunlight and consume little power. That’s why the one I picked has an E-ink display. But as is the tendency of such things, there’s always something better just around the corner – something not yet available on the market, or still just a bit too expensive. This post is a collection of the (hopefully) emerging display technologies I know of, with a focus on E-reader style displays.

E-ink

I know – not really emerging when I’ve already ordered a third-generation device, is it? Still, these deserve mention at least as a point of reference. E-ink displays are purely reflective and stable, meaning the energy to display the image comes from light bouncing off it. Changing the image, on the other hand, is a lengthy process frequently involving multiple passes (at least for grayscale). Newer driver chips have reduced this, but it still suffers from flashes to black and white. E-ink Triton was recently announced, and will add color capability.

SiPix

Also on the market, and nearly identical to E-ink displays: SiPix, who claim to be the biggest maker of electronic paper modules. This is probably because they make custom models, capable of showing only specific segments. They do make active-matrix pixel models as well.

Nemoptic Binem

This was the type I was most convinced to wait for. Of course, only days later the company went bankrupt, and I made my compromise. So what was so good about Binem? It came down to a few properties: the same manufacturing technology as common TFTs, a translucent bistable layer so it could be combined with a backlight (demonstrated with OLED), and updates at video speeds without any flashing whatsoever. We never got to see what the cost might be, and I boggle at a world where this company couldn’t convince investors.

Pixel Qi

An active contender is Pixel Qi. I’ve been wishing for one of these displays for a long time, since they’re both full-speed displays and feature a reflective mode. They’re proven technology, being a spin-off from the OLPC project, yet they’re barely beginning to inch into the market. Currently they’ve launched a DIY kit for a few netbook computers, and we’ve heard of one tablet device that should have a Pixel Qi display. A peculiarity is that the display goes nigh monochrome when passive, which makes the subpixels more interesting for spatial separation – though this only triples the resolution along one axis. I’d have considered a non-square color pixel ratio to exploit the higher resolution better – video formats drop the color resolution anyway.

Mirasol

Probably the most promising development among passive displays, Mirasol uses interference of light wavelengths to accomplish color. Not only that, it’s a bistable technology with very fast updates – although not yet at common video rates. This might have gotten me to wait, as there are now rumors that there’ll be an e-reader with their technology next year. Unanswered questions include what sort of resolution they’ve accomplished, as the elements are literally bilevel and will require more subpixels per pixel to get gradual levels. On the plus side, this means they’ll be working on getting the resolution much higher than the intended pixel density, and I’m a sucker for true resolution. Let’s hope the controllers will exploit it.

Unipixel

This display type is neither stable nor passive. Unipixel displays are pure active matrix, like today’s common TFTs, but operate with very rapid shutters. The idea is to work like some color scanners, flashing whichever color base you need (typically red, green and blue) through the shutters. This basically requires a distributed PWM controller, fast enough to give the desired color precision and wide enough to handle every pixel at once. It’s been done for OLEDs, but there the emitting layer has the convenience of being the last visible layer. Essentially it’s a flat-panel version of a DLP. The advantages of this method are that each pixel sits in the same spot (no shifted subpixels), the shutter can pass more of the light than common panels do, and since it starts out fast there’ll be no ghosting. It’s interesting, but hardly the stuff to compete with stable displays on energy efficiency.

OLED

Another technology neither stable nor passive, but included for comparison. OLEDs use directly emitting subpixels, which makes them very energy efficient for an active display; there simply is no backlight to be filtered. But they do suffer from the fact that all the picture detail has to be emitted from electrical power, meaning it’s impractical to compete with sunlight.

TFT

A few of these are used in e-readers. TFTs are divided into a few different technologies, and the Pixel Qi display actually belongs to this group – but most modern ones are limited to backlit operation; legibility with the backlight off is normally negligible. Apple’s iPhone and iPad displays are of the IPS type, which has better colors and viewing angles than most – I got an IPS panel for my desktop display. The cheap panels tend to be of the TN type, which has horrid viewing angles – colors go nuts as soon as the angle is a bit off. This is what most laptops and TVs use, and you’ve surely seen the result when you tilt the screen just a little wrong. The same effect occurs when you get close – and since it differs for horizontal and vertical angles, you can’t expect decent performance if you turn to portrait mode. And to make matters worse, a lot of displays are built glossy or shiny. Great if you need a mirror, but that’s not the purpose of my displays.

CPT

A direct competitor to Pixel Qi, CPT have a sunlight-readable display that doesn’t lose color quite as much when lit strongly from the front. I don’t know much more, as all I’ve seen are a few quick video clips of a fair booth, from netbook news.

I’m sure I’ve missed a few interesting ones. Please let me know which!

Playstation 3 debacles

A long time ago I bought a Playstation 3 console. I specifically hunted down the first model released here because of the alarming rate at which functions were disappearing from new models.
Features that had been removed at the time, but not on my model, included: Linux support, half the USB ports, memory card reader, and Playstation 2 compatibility.

In April, Sony abruptly decided that was not enough. They started killing features – indeed, key selling points – of machines they had already sold, models they had not made for years. Suddenly I was expected to sacrifice the computer functionality, as well as the contents of the hard drive, without explanation. This was a change they had vehemently promised not to make after concerns were raised about the less capable slim models. My refusal led to the deliberate sabotage of other functions, including online gaming, new games, and even the ability to use credit I had already paid for to buy expansions for the games I own.

As that last was their strongest money-maker, I know the credit thus withheld from me is easily outweighed by the purchases they won’t let me make. From a purely economic standpoint it is an obvious loss for them. Yet the few replies I got (they systematically ignore contact attempts) made insane claims, like this being an improvement for all PS3 owners.

About a week ago, the first “modchip” for the PS3 appeared. I do not think it a coincidence that this occurred mere months after Sony performed this attack on customers – after years of no such modification being around. Sony have tried using legal means to restrict distribution of the accessory, which is doomed to failure as the device was replicated in a matter of days; it is now possible for anyone to make one. Unlike the features Sony attacked, the dongle is not specific to the early PS3s.

Yet in all of this, I’m still waiting to get back what was lost. The new dongle permits running independent code, but as yet I haven’t heard of anyone getting Linux to run with it. And that is not a feature up for negotiation.

Camera selection

As you may or may not know, my computer systems mainly run Debian GNU/Linux. Somewhat contrasting with this, I also follow some video services online, such as YouTube and others. Sadly, they require Flash. Anyhow, I was inspired by several videos online and thought it would soon be time to post one of my own, which led me to search for a camera.

After much digging around, I settled on a Logitech QuickCam AF. After attempting to use it for a while, I can only conclude this was not the best choice. It has its good sides – resolution up to 1600×1200, 30 fps frame rate (up to 800×600), panning and tilting, a UVC interface, JPEG compression and motorized focus. As simpler webcams go, it’s fair. But it doesn’t have true autofocus like the Vision Pro, or 30 fps 720p like the Microsoft LifeCam Cinema.

I found a few programs that cooperate with the camera, most notably mjpg-streamer, though it’s not very polished. I mucked about a bit with JavaScript and wrote a simple page adding clickable panning as an experiment, but it would need the ability to tilt and pan in the same operation. It does have the advantage of needing very little CPU, and it demonstrates working control of pan and tilt – which neither luvcview nor v4l2ucp do, in the packaged versions.

The most jarring problem I’ve encountered has to do with lighting. This camera wants light – lots of it. By default it solves this by raising the exposure time, leading to awful frame rates and blurred pictures. On the other hand, I really do have a shortage of light, so manually lowering the exposure (why are there two settings to make it manual?) leads to visible flicker and color distortions.
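For reference, this is roughly how I poke at the exposure from the command line with v4l2-ctl – a sketch only, since the control names (exposure_auto, exposure_absolute) and their values vary by driver, so check the control listing on your own device first:

```shell
# List the controls this camera actually exposes
v4l2-ctl -d /dev/video0 --list-ctrls

# On many UVC cameras, exposure_auto=1 means manual mode;
# exposure_absolute then sets the exposure time (driver-defined units)
v4l2-ctl -d /dev/video0 --set-ctrl=exposure_auto=1
v4l2-ctl -d /dev/video0 --set-ctrl=exposure_absolute=200
```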

In all, I find there’s a need for better processing applications: something to help control the focus, recode the stream (as with ffserver), and track targets (either manually or via motion detection). I’m aiming for video conferencing, so I’ll want audio with echo cancellation as well. On that note, the mono mic in the webcam is limited to a 16 kHz sample rate. It appears BruteFIR is the program for that, but how to set it up properly is a mystery.

Has anyone heard of any decent solutions?

Hello, world!

This is my first WordPress blog post, written on a Nokia E71 using Wordmobi. I have posted bloggish material before, about this phone, at linuxportalen. I might copy that over once I’m a bit more at home with things.
Right now I’m updating to Nokia Maps 3 (beta): the topographic map mode looks promising.