Writing Boring Code

I have bad news. Using Go as a shared library on Android 5.0 is simply not a good idea. In fact, it's a horrible idea. Android 5.0 introduced a new runtime called ART, which does away with Dalvik's JIT (Just-In-Time) compilation.

If you don't know, a JIT analyzes a program's runtime behavior and recompiles the hot paths to run faster. The JIT is one of the reasons you might see JavaScript outperforming C on the DNA regex benchmark that's been floating around for years.

ART instead precompiles Dalvik bytecode to native code ahead of time and runs that native code in its runtime. As a refresher, when you write Java code for Android, the Java code gets compiled to Java bytecode and then translated to Dalvik bytecode to run on the Dalvik virtual machine (so no one has to pay Oracle any cash).

Tangent: there's a new experimental compiler for Android Studio (which is based on IntelliJ) that compiles Java 7.x source directly to Dalvik bytecode.

But the ART runtime is much like the Go runtime, and the two fight over things. For the most part it works, and works well. But when you start loading more native code, say a PublisherAdView for ads that loads a Chromium WebView, which in turn loads its own native code, bad things seem to happen.

It's essentially a no-go. For pure Go projects, which are the initial target, this is a non-issue. But for integrating Go into a normal Java app, it's a huge blocker. I've given up on using Go for normal apps.

I've actually stopped using Scala as well. I like Scala, and programming in a functional language (i.e. Lisp-like) has really taught me a few things I appreciate greatly. There just hasn't been much uptake from the other Android developer on our team. He's much older than me, in his 60s. He doesn't have anything bad to say about it; he just doesn't pursue it or have much interest.

I really can't blame him. It's not just Scala; it's anything that becomes too involved and isn't well documented. He dropped me an email about fixing some SQL entries in a two-year-old word game I wrote, but mentioned he wasn't sure how to build the project.

The issue was two-fold. First, there are five different versions of the app: Google, Google Pro, Amazon, Amazon Pro, and Nook. Second, two years ago Google didn't have tooling to help with this kind of thing. Only as of maybe two months ago do they finally have stable tools for it.

I wrote a Makefile that would build all the apps and collect all the apk files into a single directory. If you're running OS X or Linux, as most developers on our team are, then it's trivial to build. If you're running Windows, you're shit-outta-luck.

Another tangent: a year back I found a make.exe for Windows, based on the work done by the git-for-windows team, that worked pretty well for a browser extension I wrote.

Still, my giving up Scala is really more like paying respects to Go. One thing I've really come to embrace with Go is being boring. Boring works. Boring is readable by people other than me. Boring is quickly buildable by people other than me.

One thing I've been considering: the golang/mobile repo added a Dockerfile. It's really neat; it boots up Ubuntu, installs the build tools, the Android SDK, Go, and Gradle (what Android Studio uses to build projects), and then builds your source code for you. What's even neater is that Docker is now available for Windows. Bootstrapping Android Studio and the Android SDK with everything required can be a pain, but saying "did you install docker.exe? Great, just run this file" is really boring and it works. It works whether you're a developer or not.

Hell, even if I'm not writing Go code, it works. It works for developers, it works for build servers, whatever.

So I've gone back to Java, and I'm embracing the boring. I'm also typing "public static void" a lot.

I like Go like I like functional programming. It’s taught me many things and it’s certainly worth review.

Git Email Notifications on Push

So I'm doing a private collab and hosting a git repo off my server. It got annoying pretty quickly sending these occasional emails: "hey, pushed xyz change that could affect abc for you, make sure to pull the latest."

Enter this script, post-receive-email

To get going with it, I did the following (line break added for readability):

wget -O /usr/local/bin/post-receive-email \
    'http://git.kernel.org/?p=git/git.git;a=blob_plain;f=contrib/hooks/post-receive-email;h=60cbab65d3f8230be3041a13fac2fd9f9b3018d5;hb=HEAD'

chmod a+x /usr/local/bin/post-receive-email

Next, link to it from your repo:

ln -s /usr/local/bin/post-receive-email hooks/post-receive

and add something like this to the repo's config:

[hooks]
    mailinglist = "jane@email.com, john@email.com"
    envelopesender = no-reply@email.com
    emailprefix = "[GIT] "

And that will notify the mailing list of any push that occurs. There are a number of other options worth exploring; just read through the post-receive-email shell script.

Sublime Text 2 and multiple cursors

Two days ago I started using Sublime Text 2 for projects. I'm coming from a long sprint of VIM usage and I have to say that Sublime is pretty awesome. In fact, I'd describe Sublime as VIM + awesome. Unfortunately Sublime isn't free, and not to steal anyone's glory, but I hope to produce something on par for free in the future.

One thing that really wowed me into at least trying Sublime was the promise of multiple cursors. That said, this wouldn't be my first foray into lusting after nonsensical features. But after only a day of usage I can honestly say multiple cursors in a text editor are just a plain win.

Primarily, I've used VIM in the past for all but the most heavily-dependent-on-IDE tasks (namely, Java). I love VIM. Just the other day I opened a 6.4GB MySQL dump to make some minor changes by hand before passing it on to an AWK script for conversion to Postgres (yay, 16GB of system memory), but Sublime is, well, simply sublime.

Refactoring tools in an IDE are typically scope-aware and let you rename method variables and class members. One thing no refactoring tool can touch, though, is editing multiple values at the same time. As I worked my way through a Python project in Sublime on the very first day, I found myself wanting to change the value assigned to five variables. Instead of a default value of None, I wanted each to (effectively) read kwargs.get('', None), and I thought, "ok! let's try multiple cursors!".

I moved into position and slammed ctrl+d five times and there they were, five cursors ready to alter the default value of five members, roughly as sketched below.
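To make that concrete, here's a hedged sketch of the kind of edit I mean; the class and member names are made up, not the actual project code:

# before (hypothetical names, all five members sharing the same default)
class Panel(object):
    def __init__(self, **kwargs):
        self.width = None
        self.height = None
        self.depth = None
        self.label = None
        self.visible = None

# after selecting the five None's with ctrl+d and typing once across all cursors;
# the key strings then get filled in per line afterwards
class Panel(object):
    def __init__(self, **kwargs):
        self.width = kwargs.get('', None)
        self.height = kwargs.get('', None)
        self.depth = kwargs.get('', None)
        self.label = kwargs.get('', None)
        self.visible = kwargs.get('', None)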

Afterwards, I reflected on just how practical multiple cursors really are. "Is this just some cheapskate refactoring tool?" No. It's much more powerful. I think that's one of the many reasons I'm a Sublime convert now, and I hope to see the idea spread through free tools in the future.

Thanks for reading. Try Sublime, hell, buy Sublime, and feel the preposterous range it covers.

xmonad

During my lunch break yesterday, I ran across a couple comments on Slashdot along the lines of "maybe emacs just works and that's why people are using it" and "tiling window managers are awesome". So in my usual "screw it, let's see" fashion, I fired up Emacs (not for the first time), which ended pretty quickly, and did some quick googling for a decent tiling window manager to try.

Enter xmonad, written in Haskell no less (something I was going to learn after I wrap up my C# starter project, a calculator with a lexer so I can insert arbitrary words into a math expression; it does proper math too, not just "eval(x)"). Anyway, within 20 minutes I was sold. A tiling window manager is the best thing since sliced bread. It solves window management issues I've been trying to fix since who knows when, things I'd looked to desktop effects for answers on: scale, expose, overviews of multiple desktops with drag-and-drop of windows, all that pretty, flashy graphical stuff that gives you good feelings but that you rarely end up using.

Key Points: 
  • Focuses on little to no need for the mouse
  • The only window decoration is a 1px border that changes color when focused; you'd be surprised how much faster programs start without decorations
  • With no decorations, dragging a window involves alt+click, which is all I do anyway, so it's a low barrier to entry for me; floating a window is rarely needed though
  • You don't minimize, you tile, so there's no need for decoration buttons
  • alt+shift+c closes any window - this could be seen as annoying, but everything centers around alt anyway, so it becomes natural quickly
  • Navigating between tiled windows is dead simple; I don't lose track of what's open.
  • alt+space swaps between different layout arrangements (3 total by default), which handle any use case I have
  • Moving windows between virtual desktops is easy
  • Use on a multi-monitor setup is where things really shine. If my first monitor is desktop 1 and my second monitor is desktop 4, I can easily swap them by selecting desktop 4 on monitor 1. No need to drag anything over or anything like that.
  • I can push windows around way too easily

I could probably type all sorts of blurbs that wouldn't really get the point across. If you're bothered by window management, give xmonad a try and read through their quick usage guide. It doesn't take much time to get used to.

I've honestly been putting off getting another monitor for my setup; I just imagined it being a pain to use. But xmonad makes me feel like a master of window management, and now I'm eager to get another monitor or two.

Fast Web Development with Damsel for Python

Time feels surmountable when looking forward. In retrospect, I see surmise.

The title of this post feels like a real plug, but I guess that's what happens when you're looking to get indexed. Recently, I tagged two new versions of dmsl in git. If you've never heard of dmsl, please go check out the README on GitHub to see what it's all about.

https://github.com/dskinner/dmsl

I started off implementing haml in Python, which gave way to a unique direction for the project. I've been using it quite actively at work and am continually enamored with its simplicity and power. Of course I wrote it, so there may be some bias, but this isn't the first or second or third or fourth attempt I've made at writing something like this. Each of those projects was short-lived in the same way: once they were mostly complete, they just flat out didn't "have it" for me.

0.3-stable is a wrap-up of features and fixes over many months of use and has, in the end, proven reliable. This tag is available on GitHub. There were still some annoyances in the code, though, related to speed and the dependency on lxml, that I badly wanted to fix.

The speed issue was solely in the use of a class inheriting Python's Formatter to handle extensions to string substitutions. These substitutions alone could take three times as long as parsing the whole document. Granted, on the whole this was still quite a bit faster than a number of template engines out there, but I felt it could be drastically reduced.
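For anyone unfamiliar with the mechanism, string.Formatter lets you subclass it and hook how fields get resolved. dmsl's actual extensions are different and more involved; this is just a minimal sketch of the general shape (with a made-up fallback behaviour), and it's exactly the kind of per-field Python call that adds up:

from string import Formatter

class ExtendedFormatter(Formatter):
    # hypothetical extension: unknown keys render as an empty string
    # instead of raising KeyError
    def get_value(self, key, args, kwargs):
        if isinstance(key, str) and key not in kwargs:
            return ''
        return Formatter.get_value(self, key, args, kwargs)

fmt = ExtendedFormatter()
print fmt.format('hello {name}{punctuation}', name='world')  # -> 'hello world'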

The second issue was the dependency on lxml. Simply put, I wanted that to go.

So recently I addressed these issues and tagged 0.4 in git. I've also made this tag available on PyPI. If installing from PyPI, the only dependencies are a build environment and the Python headers for building the C extensions. If building from GitHub, you will need Cython 0.15.1+ installed.

0.4 also changes how context variables end up in the template's sandbox. Previously, these items were packed into a kwargs dict available to templates. In 0.4 this is no longer the case; those items are unpacked into the environment for use. Planned, though, is the ability to revert to the old behaviour as needed when upgrading older installs.

While I feel the changes in 0.4 are drastic, the unit tests pass, I've successfully upgraded my personal projects as a bit of stress testing, and I'm also using it in another upcoming public project with success.

So check it out and tell me what you think.

Line 6 UX2 and Linux

I originally wrote a bad/sad review about the Line 6 UX2 not working under Linux, as can be seen here: http://line6.com/community/thread/17663

I was looking to sell it recently when I decided to give it another look and try out the drivers someone wrote for the PODxt devices, found here: http://www.tanzband-scream.at/line6/

The 0.8.1 release failed to compile, so I checked out trunk of the development version:

svn co https://line6linux.svn.sourceforge.net/svnroot/line6linux
cd line6linux/driver/trunk
make
sudo make install

To my surprise, it worked fairly OK. My Line 6 UX2 lit up and seemed ready to go. One of the first things I noticed was a constant buzzing out of the right channel monitor. This was alleviated by simply plugging my guitar in. Next, I read through the docs to get some general info. I went ahead and fired up alsamixer, increased PCM output, and zeroed out the monitor. Note: the docs say to set PCM output to zero, as it can be much louder than the guitar monitor; I had no trouble with this, and when I increased output to 75 the volume was normal.

So then I fired up JACK and played around with a couple settings. To my surprise I was able to get latency down to 1.5 ms according to Ardour (2.87 according to QjackCtl)! Either way, this was much lower than I ever managed with my Edirol, which normally clocked in around 15ms and was littered with xruns that were very audible in the recording; normally I had to go to around 22ms to get good sound with no xruns. After a short recording session, I noticed QjackCtl had logged numerous xruns, but I never heard a thing. Once during the recording, Ardour disconnected from JACK, but that could be totally unrelated.

This is fine news indeed and prompts me to want to hold on to this device. The bad news is I have been unable to get the microphone inputs working. I haven’t had time to look into it fully but hopefully after exploring the line6linux docs some more, I can have some success with getting this working.

So all in all, this is great. What I would really like to see is Line 6 spend a few resources on this project or do something themselves. But that aside, hardware-wise (I guess), this little bugger rocks!

js.js say whaa

Whaa? Oh, right. Well, I've dug deep into JavaScript. Well, sorta, and it was all in an effort to evaluate whether to use prototype.js or MooTools. My choice? Neither. Instead I'll focus my energy on expanding the default object in my js.js file. I know, it's totally unoriginal. But let's face the facts: it's 488 bytes.

And hot damn, it's amazing what 488 bytes can do. You can check it out here: https://github.com/dskinner/js.js

I've placed it in the public domain and whatnot. I think the focus will be mostly on custom constructors and letting prototype() paste it all together. And of course, it can just be a staging ground for continued explicit prototypal declarations. It just uses what's there, and that's pretty sweet.

A New Way to Prototype with Javascript

I've been doing a lot of reading. One place I keep happening upon is Crockford's JavaScript pages. I don't really know who that is, but I've occasionally read he's some sort of JavaScript legend. Well, I was looking over one of his pages describing instantiating new objects, specifically: http://javascript.crockford.com/prototypal.html

He talked about prototypal behavior, how it should work, and his simple function for doing so: effectively, creating a blank function definition and then setting its prototype to the passed-in object. I suddenly realized something. I've been going about it all wrong with this class() business. Furthermore, what I had in place was his function, but on steroids. Instead of initializing a blank function, I have what I'd been considering a sort of meta object that links to magic methods. And instead of setting the prototype to the object passed in, I set it to all the objects passed in. So I took this thought and made some minor changes to the code, including being able to pass in objects, not just functions. Now I can write stuff like this:

x = {get: function() { return this.url; }};

function y() {
    this.get_url = function() {
        return x.get.call(this);
    }
};

var super_object = prototype(y, x);

and now super_object has the methods of y and x, where the order of the arguments decides precedence of inheritance. So I can create a new instance by

var a = new super_object({url: '/test/this'});
a.get_url() // returns '/test/this'
a.get() // returns '/test/this'

There's some behind-the-scenes action here. What I did was abstract out the meta object with the intention of overriding it. But on some more thought, I think I'll simply make it explicit; this way one could define any number of meta objects with their own magic methods or, if it so fits, simply pass in a blank function() {}, following Crockford's lead. So it would look more like

var super_object = prototype(y, x, meta); // or whatever you call your meta, or
var super_object = prototype(y, x, function(){}); // for no magic methods or special constructors

Speaking of speed, it's important to note that there are no call()s or apply()s ever, though that's not to say one couldn't write them into a custom meta object. Point being, it runs fast, just as fast as typing it all out manually. It doesn't strive to be "classical" in any way. It simply focuses on custom constructors for multiple objects and a simple way to combine those objects, given the order of precedence of the arguments. It could just as easily be used in conjunction with explicit prototypal declarations. Seems like a win-win for keeping it simple.

*** Edit *** I forgot to mention: in the above example, x is an object that super_object inherited, but say you override x's method and then need to call the original? Well, it's a good bit shorter since it's already an object, simply

x.get.call(this)

No need to specify the prototype. I think this could easily become a simple design paradigm I follow, though I'll need more experience to judge properly.

Javascript's not all bad

I gotta admit, through all the frustration and experimentation I've gone through with the language recently, it's not so bad. At first I thought, "expressive? no way," because it seems like when you go to be expressive, you crop up with irreparable errors that eventually force you into one "expression". Frustration follows each step, as learning does for me, yet I carry on.

Now I'm feeling the chains loosen. I don't feel so tied to a particular paradigm when writing out bits of JavaScript, particularly paradigms I've brought over from projects I've worked on in Python. I'm looking at new ways to do things in JavaScript; in particular I was pretty happy to see getters and setters in JavaScript 1.5. I've always enjoyed them for a couple reasons.

Getters and setters provide a consistent API

A lot of times we find ourselves writing utilities, and libraries of utilities, to perform particular actions. When using these libraries (or someone else's, for that matter), it's important to have a consistent API so you can think about the task at hand, not the details of the API. If you have a series of attributes on an object, but some of them depend on others, getters and setters might prove useful for keeping property access consistent. If you have the properties parent, children, x, y, w, h, batch, and visible, then having to remember to call get_parent() or batch() or whatever for some, while reading .w and .visible directly for others, really blows. All I can say is that I hope there's auto-generated documentation to keep at hand, which will be a pain if there are long stretches between uses of the library.

Getters and setters provide a mechanism for error checking

Another use I've found for getters and setters is error checking. They provide a centralized point to assert a value is valid before getting or setting it. So say I have

widget.x = input

There are a lot of places I could check input, but if I'm getting input and setting x from all sorts of different scenarios, being able to check it with a setter on x consolidates a lot of code.
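For illustration, here's roughly what that consolidation looks like as a Python property (a hypothetical Widget with a made-up constraint); the same idea carries over to a JavaScript setter:

class Widget(object):
    def __init__(self):
        self._x = 0

    def _get_x(self):
        return self._x

    def _set_x(self, value):
        # centralized check: every assignment to widget.x passes through here
        if not isinstance(value, int) or value < 0:
            raise ValueError("x must be a non-negative integer")
        self._x = value

    x = property(_get_x, _set_x)

widget = Widget()
widget.x = 5    # fine
# widget.x = -1 # raises ValueError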

Anyway, having access to getters and setters in JavaScript will be useful, to say the least. Embracing the functional style of JavaScript looks like a win-win situation. Unlike others, I have no gripes with calling object.prototype.method.call(this), but even that is probably unnecessary in a lot of situations; it's trying to fit a bull with a shoehorn.

What's wrong with JavaScript prototype?

** Edit ** You can see my current approach to these frustrations here: https://github.com/dskinner/js.js

Oh, that's easy. Ok, let's write up a base object we'll want to inherit from (note: I would not actually write a this.get_name to retrieve a this.name, duh; it's all in the spirit of an easy read).

function A(name, age) {
  this.name = name;
  this.age = age;
  this.get_name = function() { return this.name; }
  this.get_age = function() { return this.age; }
}

Good, ready? Oh shit, wait a sec: according to MDC we were being naive. Now the only way to call the get_name and get_age functions is on an instance of A. Well crap, that doesn't help inheritance much, prototypal or not. Alright, so let's do this right.

function A(name, age) {
  this.age = age;
  this.name = name;
}
A.prototype = {
  get_age: function() { return this.age; },
  get_name: function() { return this.name; }
}

Yeay, yippie for us, now let's extend A with B.

function B() {};
B.prototype = new A;

Yeay, yippie, now let's make C with an extra param and function and extend A.

function C(name, age, title, money) {
  this.title = title;
  this.money = money;
};
C.prototype = new A;
C.prototype = {
  get_title: function() { return this.title; },
  get_money: function() { return this.money; }
}

Good? Alright, let's... oh crap! It doesn't work. Right, right, that's right: I totally erased the A prototype by using the prototype = {} syntax. Ok, so that syntax is no good for working with inheritance unless it's the top-level thing, but crap, I don't wanna think about that. Whatever, let's just fix C for now, though readability will be a little funky. Hey... I know! Let's just say that if I use the prototype = {} syntax, that's a way to differentiate it as a top-level parent! Yeah! That's great justification cough*not*cough.

anyway

function C(name, age, title, money) {
  this.title = title;
  this.money = money;
};
C.prototype = new A;
C.prototype.get_title = function() { return this.title; }
C.prototype.get_money = function() { return this.money; }

Ok, let's fire this bad boy up and test C... Where the hell is my name? And age!? Son of a... I see, so I need to manually call the constructor of the same damn thing that I C.prototype='d.

Alright, it's cool, let's fix it, I can dig it. Let's add that A.call:

function C(name, age, title, money) {
  A.call(this, name, age);
  this.title = title;
  this.money = money;
};
C.prototype = new A;
C.prototype.get_title = function() { return this.title; }
C.prototype.get_money = function() { return this.money; }

Sweet buttery buttons! It works! Wait a sec... why is this code running slower than before? Yeah, yeah, there's creating the name and age now that wasn't happening before, but it's more… it's .call()! Wth?! Why is this causing it to go slower? Alright, alright, it's cool, let's just… migrate these object property inits out of the constructor and into a prototype function that handles the needs of everything that inherits it, since it's mostly the same.

Ah hell, I'm hungry, maybe another day...

A Javascript Class with magic methods

Hey, ok, so this is what I have so far, totally preliminary:

function class() {
    // build a replacement constructor that runs the __init__ magic method
    var that = function() {
        this.__init__(arguments[0]);
    };
    that.prototype = new object;

    // copy members from an instance of each argument onto the prototype,
    // last argument first so earlier arguments take precedence
    for (var x=arguments.length-1; x>=0; --x) {
        var m = new arguments[x];
        for (var i in m) { that.prototype[i] = m[i]; }
    }
    // replace the function named by the first argument on the global object
    this[arguments[0].name] = that;
}

function object() {
    this.__init__ = function(kwargs) {
        for (var k in kwargs) {
            this[k] = kwargs[k];
        }
    }
}

Short, right?

Then I can write something like this,

class(A, object)
function A() {
    this.get_name = function() {
        return this.name;
    }
}

class(B, A)
function B() {
    this.get_age = function() {
        return this.age;
    }
}

class(C, object)
function C() {
    this.__init__ = function() {
        this.name = "OVERRIDE";
    }
}


var a = new A({name: "Daniel", age: "24"});
var b = new B({name: "David", age: "25"});
var c = new C({name: "John", age: "26"});

Effectively I'm just sticking a little header over normal JavaScript functions, and everything works as one would expect.

a.name       // returns Daniel
a.age        // returns 24
b.get_age()  // returns 25
b.get_name() // returns David
c.name       // returns OVERRIDE

And to boot, it executes at the same speed as writing it the "native" way. Here's what I have for "native" (I'm a noob, so correct any errors):

function object() {}
object.prototype.init = function(kwargs) {
    for (var k in kwargs) {
        this[k] = kwargs[k];
    }
}

function A(kwargs) {
    this.init(kwargs);
}
A.prototype = new object;
A.prototype.get_name = function() {
    return this.name;
}

function B(kwargs) {
    this.init(kwargs);
}
B.prototype = new A;
B.prototype.get_age = function() {
    return this.age;
}

function C() {
    this.name = "OVERRIDE";
}

I ran a test importing each implementation, respectively, and got similar results in execution speed and memory size. I created 100,000 objects each of A, B, and C; each approach occupied 78MB according to top, and each consistently ran between 2100-2300 ms, with variance that occasionally hit 3000 ms. Ultimately it's not surprising, as all the class function I wrote does is automate how you would do it natively. What I'm surprised about is that there's no extra cruft when the JavaScript runtime handles it. I never intended this to be useful; it was all part of an experiment delving into JavaScript scope and messing with constructors so I could evaluate the use of a library like prototype.js or MooTools.

But hell, so far this little bit of code is turning out to be fairly useful. I imagine if I write more magic methods, the memory size will increase by a small amount. I half expected to see a difference in memory since C is much more stripped down in the "native" version vs. the version with the init cruft from the object function.

This has all been using spidermonkey-bin (smjs), so now I'm curious to see how other JavaScript implementations handle the details, as from the get-go I expected a huge increase in memory (not that I know anything about anything) from functions existing in the constructor and then being linked to a prototype, and all those "new" instances called in class. But it all seems negligible, in SpiderMonkey anyway. This could be a totally different story in IE, lol.

For reference, here's my lame-o profiling code (I know, I know, but it was enough to find all sorts of issues when exploring JavaScript scope and constructors):

var date1 = new Date(); 
var milliseconds1 = date1.getTime(); 

load('custom.js'); // point this to which script to test
var l = [];
for (var j = 0; j < 100000; ++j) {
    l.push(new A({name: "Daniel", age: "24"}));
    l.push(new B({name: "David", age: "25"}));
    l.push(new C({name: "John", age: "26"}));
}

var date2 = new Date(); 
var milliseconds2 = date2.getTime(); 

var difference = milliseconds2 - milliseconds1;
print(l.length)
print(difference)

EDIT: Also, the object function needs a class(object) call so you can call its magic methods, so in C:

this.__init__ = function(kwargs) { object.prototype.__init__.call(this, kwargs) }

About which I'm a little confused, because I originally expected it not to work. Any time class(X) is called, X's constructor gets replaced, so another class(X) later on will be referring to that replaced constructor, which I thought would cause some kind of error. So deep inheritance might cause some bad mojo with the amount of memory, or hell if I know. I haven't looked into that yet.

EDIT 2: Also, I'm not sure how much of a "class" this really is. If it turns out useful I may find a different name, maybe just call it "prototype", so like:

prototype(A, object)
function A() {};
var a = new A({name: "daniel"});

Baffling Results from my Javascript Class(-ishness)

** Edit: regarding the following, I suddenly realized what the problem was: .call is slow and was used in the "native" approach. Nevertheless, I've had positive results following through on multiple inheritance and magic methods; refer here: http://wp.me/piHZk-14 **

Ok, so recently I've taken an interest in JavaScript. By interest I mean taking it more seriously as a language. One of the first things I wanted to do was see whether I should adopt a framework like MooTools to allow for classical-inheritance-type stuff, or develop using JavaScript's prototypal inheritance. This led me to dig deep into JavaScript scope and all of its nuances, especially where prototype is concerned.

Along the way, pecking at my smjs console (aptitude install spidermonkey-bin), I eventually wrote this function class() {} while trying to see what I could get away with in poking at the scope of functions and their prototypes. I was particularly annoyed by the separation between the constructor and that which was prototyped, and the foresight it requires, which I'm going to lack since I'm new to the game. Anyway, here's the function:

function class() {
    var that = arguments[0];
    for (var x=arguments.length-1; x>=0; --x) {
        var m = new arguments[x];
        for (var i in m) { that.prototype[i] = m[i]; }
    }
}

Effectively, this allowed me to write my prototypes in the constructor as well as extend functions. It was all in the name of learning and I wasn't considering it practical. So basically I wrote stuff like

function object() {
    this.init = function(kwargs) {
        for (var k in kwargs) {
            this[k] = kwargs[k];
        }
    }
}

class(A, object)
function A(kwargs) {
    this.init(kwargs);
}

var a = new A({name: "Daniel", age: "24"});

I was also playing around with object constructors (unsuccessfully), curious whether I could implement magic methods that could be inherited and run automatically, but yeah, that went nowhere. So I was all but about to abandon this whole excursion when I decided, before I do, to check how much more memory my function class() {} uses and how much slower it is than doing it the standard way. By standard, I mean what I basically learned from perusing the net and from the MDC JavaScript 1.5 Engineering Model Example. Here's what I have for the "standard" way:

function object() {}
object.prototype.init = function(kwargs) {
    for (var k in kwargs) {
        this[k] = kwargs[k];
    }
}

function A(kwargs) {
    object.prototype.init.call(this, kwargs);
}

var a = new A({name: "Daniel", age: "24"});

Now, my profiling isn't very scientific, I suppose; I used top and timed the execution from within JavaScript, but the results are consistent. What I basically did was this:

var date1 = new Date(); 
var milliseconds1 = date1.getTime(); 

load('test2.js');
var l = [];
for (var j = 0; j < 5000; ++j) {
    l.push(new A({name: "Daniel", age: "24"}));
}

var date2 = new Date(); 
var milliseconds2 = date2.getTime(); 

var difference = milliseconds2 - milliseconds1;
print(l.length)
print(difference)

where load('test2.js') was the "standard" way and load('test4.js'), in a separate file, was my way. The first thing that caught me off guard was that memory consumption was exactly the same. I was half expecting my method to take more memory because the function definitions existed in two places, but I guess the JavaScript runtime doesn't cause this to happen, so yippie freaggin do da. Now, what left me baffled was that my way was consistently faster than the standard way. Here are the time results, running 10 in a row:

All times are in milliseconds.

=== Standard ===
54 58 49 55 55 53 53 49 53 52

=== My Way ===
42 41 42 45 45 42 42 48 48 48

The numbers were close, so then I decided to increase the number of objects created, to perhaps provide a more significant and visible difference. So I increased the number of objects from 5,000 to 500,000.

=== My Way === 5716 4257 4229 4331

=== Standard Way === 7601 4866 4913 5564

It's as if the JavaScript engine runs faster instantiating an object property and linking it to a prototype than it does just instantiating a prototype property. And it doesn't require any extra headroom in memory to do it.

If there are any JavaScript ninjas who can explain what's going on, that'd be simply awesome. Speaking of which, I'm gonna go find a mailing list now…

Motionbuilder from AutoDesk and OpenCV in Harmony

For anyone curious, I have spent the past week delving into the world of motion capture, image recognition, and the like. The task: build a headset with a mounted USB camera that tracks the eye and moves the eye of a model in MotionBuilder accordingly. Just to get this out there, I know nothing about 3D art, modeling, or anything of the like, including MotionBuilder. This is the first time I've ever even used the software.

That aside, I asked someone for a model with a moveable eye, and what I got was basically a head with the eyes fixed to a null point: when the point moves, the eyes move. So next was to get the data I'd been receiving from OpenCV into MotionBuilder. Just to clarify, currently I am simply using a default Haar cascade for face detection so as to track something with motion on the screen. I'm taking the center point of the face and using it as a proof of concept to move the null. Books are expected Monday so I can delve into tracking the pupil.

The thing about MotionBuilder, though, is that its Python implementation is hooorrible. As in the worst of the worst. If I were to say it's barely usable, I might be hitting the nail on the head for some, but it still feels like a bit of an overstatement. That said, it's still great that it has Python support at all, so kudos to someone for at least trying to implement it. I'm sure it's a daunting task for this type of project.

Let me briefly describe the limitations for anyone who might be unfamiliar. For one, the Python version used is 2.4.1, and it comes with the pyfbsdk library and nothing else. Considering the lack of documentation for the Python module, I would typically resort to something like

import inspect
for x in inspect.getmembers(FBSystem().Scene):
    print x

but, as I said, no libraries. So the first thing I did was go to the Python site and download 2.4.1. It's not actually listed there, but just click on whatever the latest 2.4.x is and then change the revision number to 2.4.1 in your address bar. Download, install, then copy all the .py files from your C:\python24\Lib directory over to your Program Files\Autodesk\python\lib folder. Now you can do some basic stuff like inspect.getmembers. Secondly, the Python console in MotionBuilder is horrid. You can type in no more than one line at a time. Syntax errors have crashed the console. I can't up-arrow to previous commands. The output is limited to whatever the last command you ran was, aaaaand the text of the output isn't selectable. So that means no copy and paste of all the methods of whatever after you inspect.getmembers… aaargh! I still have the screenshot on my desktop somewhere…

But fear not, because there is telnet. In the Python console within MotionBuilder, there is a tab that lets you enable telnet. So you can open up a telnet client, connect to 127.0.0.1 port 4242, and hopefully you'll be presented with a Python console. I say hopefully because I didn't have the best of luck the first few times, thanks to MotionBuilder's (and/or my own) quirkiness.

And finally, the real bugger in it all is that if you write a Python script that takes some time to run, all of MotionBuilder locks up until the script is finished. So this means no script running in the background waiting to receive data. Instead, the data needs to be sent to MotionBuilder via the telnet link. I found some great resources, but mainly I'll list this particular one:

http://chrisevans3d.com/tutorials/mbui.htm

He's got some great sample code for integrating a separate wxPython script into MotionBuilder and breaks down a number of things as well. Unfortunately, his code for using telnetlib from a separate Python instance to issue commands to MotionBuilder didn't work out so hot, which is precisely what prompted me to write about this. The code he had listed seemed a bit cryptic, with these read_until's taking params of 0.1 and 0.01, and I didn't see anything of the sort mentioned in the Python docs (barely looking, of course), so I wrote my own class for doing this, saved it in mbpipe.py, and it reads as follows:

import telnetlib

class MBPipeline:
    def __init__(self, host="127.0.0.1", port="4242"):
        self.tn = telnetlib.Telnet(host, port)
        self.tn.read_until('>>> ')

    def call(self, command):
        self.tn.write(command + '\n')
        # read up to the next prompt, then strip the trailing '\r\n>>> '
        r = self.tn.read_until('>>> ')[:-6]
        try:
            return eval(r)
        except:
            return str(r)

Now, from my script where I'm doing OpenCV stuff (or simply from your console), I can do

from mbpipe import MBPipeline
mb = MBPipeline()
mb.call('FBSystem().Scene.Components[216].PropertyList.Find("Lcl Translation")')

and what it returns is the actual tuple from MotionBuilder. In any case where the string coming back over the telnet session can be eval'd, you'll receive the object; otherwise, just the string.

Just thought I’d share. :D

I may comment later on my experiences with OpenCV, which so far have been great: QueryFrame, Haar cascades, converting to an image and pushing it over to pyglet to render my live video to the screen (note: OpenCV has its own windowing and controls, which most will probably find useful).

Pandora minus the cruft with XUL

I made this a while back, then found out a couple months ago that Pandora has a ?cmd=mini GET var that shows a smaller player. So I updated this XUL package I made. I'm no expert, or even a novice, in XUL; very basic stuff is all I've cared to do, but I figured this was a great way to use the service minus the cruft and get it out of my browser. Google Chrome's "save as application" is great too if you're on Windows.

Pandora via XUL can be downloaded from this link: http://dasacc22.googlepages.com/pandora.tar.gz

This should work on any platform with XULRunner. You can run it on Linux with

$> xulrunner application.ini

in the folder, and similarly on other platforms. I've also created an sh shortcut in the folder that runs

$> nohup xulrunner application.ini > /dev/null 2>&1 &

to background the service. When starting from the shortcut, click "run" to start it. Adobe Flash complains on startup, saying it prevented something dangerous from happening (I guess because the Flash object is embedded in XUL??) and that it shut down the offending application. Just click OK and it runs just fine. No need to click Settings like it prompts you to (it doesn't seem to launch Settings anyway).

If ?cmd=mini ever disappears, you can just update the XUL package by opening chrome/content/main.xul and replacing the embed object with the one from the site.

IBM Sliding Puzzle Contest

So someone passed on to me a PDF for an IBM sliding puzzle contest. Basically, it consists of a 3-row by 3-column puzzle with one empty space (you know the ones), and you have to write a piece of software that solves for the answer. The instructional PDF suggests that while it's acceptable for your answer to be over 20 moves, it should optimally be about 20 or fewer and run relatively quickly.

At first I had no clue how to do something like this, but I found the idea very interesting. Four hours later I had a Python script that solves for all possible solutions up to however many moves you choose. Turns out the shortest answer is 12 moves, according to my script. I haven't validated the 12-move answer, but I did validate a 14-move answer by hand with success (which was actually a 12-move answer with a repeated move making it 14), and I see little reason for the 12-move answer to be wrong.

Anyway, here it is in all its glory

# initial board, read left to right, top to bottom; '0' marks the empty space
puzzle = ['0', '4', '2', '5', '8', '3', '1', '7', '6']

answers = []

# for each position of the empty space (indices 0-8), the legal swaps
possible_moves = [
    [lambda p: swap(p, 1, 0), lambda p: swap(p, 3, 0)],
    [lambda p: swap(p, 0, 1), lambda p: swap(p, 2, 1), lambda p: swap(p, 4, 1)],
    [lambda p: swap(p, 1, 2), lambda p: swap(p, 5, 2)],
    [lambda p: swap(p, 0, 3), lambda p: swap(p, 4, 3), lambda p: swap(p, 6, 3)],
    [lambda p: swap(p, 1, 4), lambda p: swap(p, 3, 4), lambda p: swap(p, 5, 4), lambda p: swap(p, 7, 4)],
    [lambda p: swap(p, 2, 5), lambda p: swap(p, 4, 5), lambda p: swap(p, 8, 5)],
    [lambda p: swap(p, 3, 6), lambda p: swap(p, 7, 6)],
    [lambda p: swap(p, 4, 7), lambda p: swap(p, 6, 7), lambda p: swap(p, 8, 7)],
    [lambda p: swap(p, 5, 8), lambda p: swap(p, 7, 8)]
]

def serialize(p):
    return ''.join(p)

def swap(L, m, t):
    # move the tile at index m into the empty spot t and return the new serialized board
    if L[t] == '0':
        L[t] = L[m]
        L[m] = '0'
        return serialize(L)
    
def no_dups(S):
    if len(S.split("-")) != len(set(S.split("-"))):
        return False
    else:
        return True

def search(tree, index=0):
    # each entry in tree is a '-'-separated history of board states;
    # expand every path one move at a time, capped at 12 moves deep
    if index < 12:
        for each in tree:
            for i, val in enumerate(list(each.split("-")[-1])):
                if val == '0':
                    generation = []
                    for move in possible_moves[i]:
                        result = each+"-"+move(list(each.split("-")[-1]))
                        if "123456780" in result:
                            answers.append(result)
                        elif no_dups(result):
                            generation.append(result)
                    search(generation, index+1)

search(["-"+serialize(puzzle)])
shortest_answer = min(answers, key=len)  # shortest by number of moves, not lexicographic order
print "=========="
print "Shortest Answer: " + str(len(shortest_answer.split("-"))-2) + " Moves"
print "++++++++++"
print shortest_answer

Edit: I just thought I would add: the turning point for solving this thing was how you look at the puzzle. I've always looked at those puzzles as "ok, what can I move into the empty space," rotating these pieces around and this and that, but that's totally the wrong viewpoint. The way to visualize the answer is to make the empty space your focus and move the empty space around, pushing the other numbers into place. The puzzle suddenly becomes easier to solve by hand on your own, and that's how the program solves for the answer too: by moving the empty space around, not by trying to calculate how to get a particular number to its destination.

Google Chrome, OSes, and Web Development

Ok, so everyone, their mama, and her pet hamster have written an article about Google Chrome. I've also read an interesting article that somewhat theorizes about a Google OS based on Chrome in the distant future. Only interesting because of what I have been planning to do.

First off, I read about people complaining over the initial memory consumption of Google Chrome. I don't know why, but I feel a need to state my opinion on the matter. What I care about most is a responsive system, and Chrome delivers that, short and simple. If I were to liken Chrome to something, I'd say starting an instance of Google Chrome is like typing

$> ls

at a command prompt. It's freaggin fast. Period. Running Chrome barely interferes with anything else I do. That is using it on a Core 2 Duo laptop with 2 gigs of RAM and an amd64 desktop with 1 gig of RAM. Just a quick statement of facts. Anyway, I'm not really interested in the distant future; what I AM interested in is the possibility of now. Honestly, with very few modifications I would use Chrome as a full shell replacement, on both Linux and Windows. Here's my wish list:

* A better file manager
* the ability to launch local programs from the address bar

Ok, so that's all I have for the most part. Frequently used programs I could simply bookmark. Launching them from the address bar would be very similar to what I've always enjoyed doing on Linux with the likes of Katapult in the past and GNOME Do nowadays (which does a lot more interesting stuff). Ubiquity is a Mozilla project with a similarly driven concept of a type-into pop-up console to accomplish tasks (a little more complex than just launching a program). To accomplish this, my local computer would need to be indexed, or at least the standard entries for installed programs would, but I'd prefer the first so I can search my computer for a file with

local:[search-term]

Anyone? I think a lot of this couldn't really be accomplished without patching some code (mostly the use of the address bar).

As for the file manager, the built-in one is of the likes of Firefox's: just a point, click, and launch scheme. An actually robust file manager could be written as a local webapp. Chrome is like a staging ground for writing a new breed of applications, as I see it. I actually just wrote something today to display an index of my movie collection and made a shortcut with Chrome; it works beautifully. Yes, it runs in Firefox and anything else for that matter, but would I ever use it in such? Likely not, because when I want to watch a movie, I want to click an icon and I WANT it NOW. I don't have all day to wait for something to start up. Well, I do, but it's a real buzz-killer.

Ok, enough ranting. I am looking at Chrome as a means to develop desktop-centric applications (one of which is a music app based on the likes of the SndObj library that would allow multiple people to mix at the same time), and I think the best place to start is to write a file manager, be able to bookmark the applications I use, and index my local drive so I can search it. The URLs won't be the prettiest, accessing 127.0.0.1, but it will do for now. Then on X startup or on Windows startup, Chrome is launched instead of GNOME or KDE or Fluxbox or my beloved Openbox, which has always come to save my day in one way or another, or explorer.exe.

Those are my ideas. I will be starting on some of them soon, developing in Python/CherryPy (unless there's due cause for something else) and then looking into how to create a Windows service. If anyone's interested in collaborating, feel free to contact me.
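As a starting point, here's a minimal CherryPy sketch of the kind of local webapp I have in mind; the FileIndex name and the plain-text directory listing are placeholder assumptions for illustration, not the actual file manager:

import os
import cherrypy

class FileIndex(object):
    @cherrypy.expose
    def index(self, path='.'):
        # hypothetical starting point: list a directory as plain text,
        # one entry per line; a real file manager would render links
        cherrypy.response.headers['Content-Type'] = 'text/plain'
        return '\n'.join(sorted(os.listdir(path)))

if __name__ == '__main__':
    # serves at http://127.0.0.1:8080/ by default
    cherrypy.quickstart(FileIndex(), '/')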

Ableton Live, Linux, and Wine

Well, a title doesn't get much more concise than that, I guess. For anyone interested in running Ableton Live on the Linux platform of their choice via Wine, it has been disheartening to say the least, mainly due to a DirectDraw issue that's been around for a while. Well, a couple days ago the issue was resolved in Wine 1.1.3, and Ableton is now very usable! I installed Ableton Live 6 on my laptop after getting wind of the possible fix, started the app, and immediately noticed it was copying files to the library! Normally Ableton dumps the dialog here, doesn't copy anything, and presents you with an empty "this is an evaluation" blah blah blah dialog where I have to click an invisible button. Well, long story short, it's working and working very well.

There are a couple of issues I've noted thus far that I need to open bugs for, or find bugs already opened for. One, when I arm a track for recording in Live, it prompts me with an error message saying it can't open such-and-such a file for writing. Also, with a simple project of dragging in some songs and doing some loops, everything went fine. With a more complex project of dragging in loops, samples, and other fun stuff, and throwing in some MIDI sequences and effects and more fun stuff, well... everything went great! And then I saved it, and later when I went to open it, it couldn't find a file related to the project, but it was looking somewhere I wouldn't normally expect Ableton to look (I don't think...). If I recall correctly, I'm pretty sure Ableton Live records loops and junk into a folder named after the project, but instead it was looking for a file in my ~/Samples/Recorded/ folder. I dunno, maybe that's right, but that folder is empty and the file it's asking for I cannot slocate on my hard drive, so I dunno...

Once these quirky file issues are sorted and I can save complex projects without any hitches, I'd probably give it platinum status, because it runs damn smooth.

Edit: Just to note, I'm using Ubuntu Hardy 8.04 and the repos from winehq.org.

Project Euler and XSLT

Well, when I came across Project Euler and saw the first problem, I naturally did what anyone in their right mind would do… solve it using XSLT.

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    <xsl:output method="text"/>
    <xsl:variable name="iterations" select="1000"/>
    
    <xsl:template name="sum_multiples">
        <xsl:param name="i">0</xsl:param>
        <xsl:param name="incrementer"></xsl:param>
        <xsl:param name="result">0</xsl:param>
        
        <xsl:choose>
            <xsl:when test="$i &lt; $iterations">
                <xsl:call-template name="sum_multiples">
                    <xsl:with-param name="i" select="$i + $incrementer"/>
                    <xsl:with-param name="result" select="$result + $i"/>
                    <xsl:with-param name="incrementer" select="$incrementer"/>
                </xsl:call-template>
            </xsl:when>
            
            <xsl:otherwise>
                <xsl:value-of select="$result"/>
            </xsl:otherwise>
        </xsl:choose>
    </xsl:template>
    
    <xsl:template match="/">
    <xsl:param name="three">
        <xsl:call-template name="sum_multiples">
            <xsl:with-param name="incrementer">3</xsl:with-param>
        </xsl:call-template>
    </xsl:param>
    <xsl:param name="five">
        <xsl:call-template name="sum_multiples">
            <xsl:with-param name="incrementer">5</xsl:with-param>
        </xsl:call-template>
    </xsl:param>
    <xsl:param name="dups">
        <xsl:call-template name="sum_multiples">
            <xsl:with-param name="incrementer">15</xsl:with-param>
        </xsl:call-template>
    </xsl:param>
    <xsl:value-of select="$three + $five - $dups"/>
    </xsl:template>
</xsl:stylesheet>

The same thing in Python could go something like:

three = [x for x in range(1000) if x % 3 == 0]
five = [x for x in range(1000) if x % 5 == 0]
print sum(set(three + five))

Edit: I'm still learning Python; well, after posting this, on the ride home I realized I could do it in one line.

print sum([x for x in range(1000) if x % 3 == 0 or x % 5 == 0])

CherryPy, SQLAlchemy, and URI to SQL method

Ok, some basic info first. I have SQLAlchemy objects defined. I use CherryPy and have been writing methods that query my tables in the various ways I've learned over the past couple years. That is to say,

/show_actor?id=4&production_id=2

I keep looking at this code and thinking, this just isn't right. I was even recently looking at the TurboGears 2 wiki example (getting ideas for decorator usage and such) and seeing similar code, again thinking, this just isn't right. Then somewhere between thinking and writing some code, I ended up halfway through the following and don't know how I got there. Basically it's a URI-to-SQL method. I make use of CherryPy's default method to traverse the URI that gets passed in as *args… well, traverse it as in translate it into traversing my SQL tables. It's all very short and concise, I think.

I got the first draft of my URI-to-SQL done; it goes something like this:

@expose
def default(self, *args, **kwargs):
    # the path segments that name a mapped class decide what gets queried
    for n in args:
        if n in globals():
            query = Session.query(globals()[n])
    # each (ClassName, value) pair in the path becomes a filter on that class's name column
    for k, v in make_filters(args):
        query = query.filter(globals()[k].name == v)
    yield json(query.all())

and uses this helper function:

def make_filters(uri):
    # pair up path segments: ('Production', 'aa', 'Actor', 'Jon') ->
    # ['Production', 'aa'], ['Actor', 'Jon']
    for n in range(len(uri)):
        if not n % 2 and n + 1 < len(uri):
            yield [uri[n], uri[n+1]]

Basically now, the args passed to my default method let me traverse my SQL tables as if they were a directory tree. So, you might say, I can open my Productions folder and find "aa" in there, then open the Capture data related to "aa" and find MC001, or go to my Actor folder and get all my actor files, such as "Jon", all accessible via

/show/Production/aa/Capture/MC001

or

/show/Production/aa/Actor/Jon

then using keyword arguments like

/show/Production/aa/Actor/Jon?type=json

I can specify special conditions or whatever, and I've totally eliminated that really lame repetitive code of multiple methods for fetching different data, like /show_actor?prod_id=1, or calls with multiple if statements for handling what I'm asking for. And it's totally recursive, no matter how deeply nested my tables are.

I'm sure there are tons of problems and some shortsightedness here, and I'm still a relative newb to Python (going on 6 months of use now, I think), but anyway, I really dig this. Now I can start coding my ajax app to grab this or grab that, and as I define tables through SQLAlchemy, I'll automatically be able to access my data.

Some things I need/want to do now are to provide custom filter options via keyword arguments and some sort of "deliver data as type" option… whatever: JSON, XML, YAML, yadda yadda yadda. A rough sketch of what I mean for the type option is below.
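For the type keyword specifically, here's a hedged sketch of the dispatch I'm imagining; the render helper is hypothetical, and only the json function shown further down actually exists:

def render(results, type='json'):
    # map the requested type to a serializer; xml/yaml would slot in here later
    serializers = {'json': json}
    return serializers[type](results)

# the last line of default() would then become something like:
#     yield render(query.all(), kwargs.get('type', 'json'))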

Also to note, I call a json method in my code; that's part of a (probably overly complicated) generator I wrote that takes a SQLAlchemy result object and produces a usable dict that gets dumped to JSON. It needs some extra work, I think, to be more useful, but it does what I need for now. Here's the code for that:

import simplejson

def gmap(obj):
    # walk the instance dict, skipping SQLAlchemy's private attributes
    for item in obj.__dict__.items():
        if item[0][0] == '_':
            continue
        if isinstance(item[1], unicode):
            yield [item[0], str(item[1])]
        else:
            yield item

def json(obj):
    if isinstance(obj, list):
        return simplejson.dumps( map(lambda x: dict(x), map(lambda x: gmap(x), obj)) )
    else:
        return simplejson.dumps( dict(gmap(obj)) )

Thanks to Google's advanced Python talk on video.google for breaking down some key concepts around generators and such.

Wordpress, Blogspot

I've only recently started blogging, and there were two choices that came to mind: WordPress, which I've heard about and is a rather famous blogging tool, and Blogger, which I once looked at many years ago and which also integrates with my other Google services. I figured I would give both a go, and after a few hours of cumulative use I decided I liked Blogger best, as I accomplished what I wanted more quickly. Unfortunately, I noticed that nothing I posted on Blogger showed up as a Google search result, whereas my post on WordPress would show up as a first result, typing in two keywords, overnight! Since my utmost concern is access to my meager blog, I will be discontinuing my use of Blogger but leaving it up in case it was indexed by some other service. Here are links to both blogs:

http://dasacc22.wordpress.com
http://dasacc22.blogspot.com

CherryPy, SndObj, and SVG

Ok, so some time back when I first started tinkering with pysndobj, I was toying with some ideas for a user interface. I come primarily from a web background, and I decided to toy with the idea of a web frontend to a SndObj thread (or threads) that would write to an output stream multiple people could connect to and work on collaboratively, semi-realtime. Yeah, it sounds a bit ambitious, but anyway I decided to take the time to toy around with SVG as well, which I've never worked with, to see what I might come up with.

Ultimately, I got this, cpsndobj.

This is the source code to my simple CherryPy/SndObj/SVG demo. Basically I have an SVG knob I made that you can manipulate by clicking and dragging the mouse down on it. Unfortunately, in a browser you cannot lock the mouse in place, so it can feel a bit odd if you reach the edge of your screen. But anyway, tar -xzf this bad boy, run python cpsndobj.py, and you will find a local webserver at http://127.0.0.1:8080/. Of course you need to sudo easy_install-2.5 cherrypy if you wanna use it. The rest is JavaScript (jQuery, if I remember correctly (been a while since I looked (I think this is why I like Python, avoiding all these tags (woot!)))). And I believe I have it set to connect to a JACK server, so you'll want to change it to SndRTIO if you want otherwise, or start JACK first. After you get it up and running, visit the local address /on to turn on the modulating frequency, then visit the root of the site to display the SVG knob. Now click and drag it and you'll see it do its thing.

Things to note: if you're using Internet Explorer, you'll need to install the Adobe SVG viewer (though I've not tested it in IE). Here in Firefox land, we apparently like things to run slow, so if you'd like to experience a smooth SVG knob… experience, then get Opera installed and notice the difference.

Uh, but still, I don't use Opera beyond playing with an SVG knob and playing Flash movies that don't lock up my browser on Linux.

One might say that, ultimately, I basically have a tinker toy, and that's ok (woot!).

–Edit: I suddenly realized the fallacy of my "I like Python" statement above; Python still has parentheses… duh.

SndObj, Jack, and Inputs

Ok, so I have been tinkering with pysndobj on and off for a while. One thing I have been wanting to do is get it set up with a thread using SndJackIO, doing a line-in from my Edirol UA-25 USB soundcard with my guitar. I couldn't find any documentation on how to do this with SndJackIO for a while, though there was plenty for SndRTIO. But then I noticed a reference to Core Audio on Mac and doing inputs. Effectively, there is a single SndJackIO for both input and output. So when instantiating, just do a

outp = sndobj.SndJackIO("MyName")
inp = outp

Anyway, here it is in all its glory

import sndobj

# a single SndJackIO handles both input and output
jack = sndobj.SndJackIO('test5')
inp = jack

snd = sndobj.SndIn(inp, 1)
cmb = sndobj.Comb(0.001, 0.001, snd)

# send the comb-filtered signal back out on channel 1
jack.SetOutput(1, cmb)

thread = sndobj.SndThread()
thread.AddObj(snd)
thread.AddObj(cmb)
thread.AddObj(inp, sndobj.SNDIO_IN)
thread.AddObj(jack, sndobj.SNDIO_OUT)

thread.ProcOn()

–Edit: Above, cmb is a filter that is not needed. You can simply SetOutput to snd instead of cmb and bypass that.