Wednesday, December 31, 2008

Predictions for 2009

I'm not usually one to make predictions. It's hard for me to tell the difference between a prediction and wishful thinking. But this article over at the Inquirer (still the best place to get an honest take on the industry along with /.) got me thinking about a couple of things I think are going to be important in 2009. So here we go...

2009: The Year of the GPGPU

This is more a continuation of a trend, but the Inq article made some great points that I think will put some spotlight on general purpose programming with GPUs. The key one is the recent standardization of a cross-platform way of programming these things: OpenCL. ATI and nVidia have already signed up to provide OpenCL support for their chips, and look for Intel's Larrabee platform to come with the same. I think there are still some software and hardware architectural things that need to be done to make GPGPU more efficient and easier to program. Look for LLVM (which needs an article of its own) to play a role, as it already does with OpenGL, and look for one of the chip vendors to put a GPU on the memory bus shared with the CPU and make these things sing.

2009: The year of WebKit

OK, yes, I'm playing it safe with these predictions. WebKit is already the base for Apple Safari, Google Chrome, and a host of Linux-based browsers, so it already has a ton of momentum. The reason I think WebKit is going to the next level is, first of all, the top-of-the-class performance of its new JavaScript VM (and I can't imagine why Google would continue with V8 in Chrome). But also, I am impressed with how easy it is to create your own WebKit-based browser, and how easy it is to create a Linux-based platform that uses WebKit as its front end (launch X, launch a simplified WebKit shell in fullscreen, done). I expect to see a lot more mobile internet devices built this way. At the very least, it gives embedded developers a reason to care about AJAX.

C++0x won't be C++09

I think that's a foregone conclusion, but no one really wants to admit it yet. Look for the vote to finish this year at least. C++0x will be an exciting evolution of C++ into the next generation. No, it doesn't have garbage collection, yet, but it does have smart pointers that do the job better if you use them right. C++0x makes it easier to do a lot of things, and the introduction of closures and lambda functions and expressions will breathe some life into this stalwart of the software engineering community.

Well, that's it for now. If I think of more over the next couple of days I'll post them. There are a lot of things I hope will happen, but I'm not sure they will. One thing is for sure, though: open source is here to stay and is becoming a core business model that companies still need to understand and learn to use effectively. I will continue my work with Eclipse and Wind River to help figure that out and spread the word.

Have a safe and happy New Year! See you on the other side.

Monday, December 29, 2008

A look at WebKit

A few days ago, I was playing with Google's V8 JavaScript VM library and got it compiling with MinGW in Wascana. I submitted the patch to make it work but I haven't heard back. I guess it could be the Christmas break.

But one thing that struck me as odd recently was an announcement that the next rev of Android would include WebKit's SquirrelFish JavaScript VM. I guess that shouldn't be too surprising since SquirrelFish comes with WebKit. But then why is there ARM support (the CPU for Android) in V8? And if they are using SquirrelFish for Android, why don't they use the souped-up SquirrelFish Extreme for Chrome? Especially since there are benchmarks showing it beating V8. I'm confused and can only chalk it up to Google being a big company; maybe the Android people don't hang out with the Chrome people.

Anyway, that got me looking into this whole WebKit business. I downloaded the latest nightly source build to my Debian Linux VM and, after installing a boatload of packages needed to build it, built it. I had heard the JavaScriptCore library, which implements the VM, was embeddable in C++ apps. The header files are there, but it looks like you actually have to embed the whole WebKit library to get at the VM.

That got me thinking back to an earlier idea I had: use HTML with JavaScript as your main GUI framework. With WebKit, you can embed the whole browser into your application, and you can hook up new JavaScript classes to your C++ classes to provide scripting and to give the UI access to them. It would be interesting to see how that works in action.

I think I'm starting to figure out this whole JavaScript and C++ thing, with thanks partly to something a commenter said on a previous entry. Use scripting for quick turnaround, when you want to whip up a prototype or allow for easy extension of functionality. But use C++ for areas where you need to engineer functionality. Part of your architecture design is deciding what that means. And maybe something like WebKit might be the right platform to get you off the ground.

Saturday, December 27, 2008

VirtualBox 2.1 and assorted Christmas Fun

Just some random thoughts on this Saturday after Christmas. My family and I had a good Christmas, despite a little "Fun with Autism" moment with my Autistic son, but it's all better now (patience is a key survival technique in our household). Yesterday was Boxing Day in Canada, which is a holiday here despite all the stores being open for your shopping pleasure. If you don't feel like going out, you are free to sit around, well, like boxes, which we did for the most part.

I'm spending a little time today, while everyone is playing on the PS3 and various PCs around the house, getting ready for my EclipseCon tutorial. I'm really looking forward to it. By the end of the tutorial, you'll walk away with Wascana, which you'll use to build qemu, a little Debian Linux image running in that qemu, and a cross-compile toolchain and CDT integration that you also get to build to create apps for Debian from Windows (and maybe Linux). Lots of hands-on, and hopefully an appreciation of why the CDT is the first-class cross-platform C/C++ development environment.

Before I get back into playing with qemu, it was cool to see a new version of the VirtualBox emulator come out, 2.1. It's a minor version increase, but there are two significant features added. One is 64-bit support on 32-bit platforms. This is critical for me and my installer work at Wind River, where I need to test and debug on 32-bit and 64-bit platforms. I don't trust 64-bit Linux enough yet to make it my main Linux environment, not to mention my downright fear of 64-bit Windows.

The other cool thing is more on my personal interest front. They have an initial release of OpenGL support. If you read this blog regularly, you'll know I have a dream of an open Linux-based game console/multimedia set top box. I'd like to try some ideas out on a Linux platform with 3D hardware without actually buying any and this is the first emulator to have OpenGL support.

Unfortunately, they only have Windows guest drivers at the moment but have promised Linux/X drivers soon. I can't wait, but it does lead me to drop my plans for working on OpenGL support for qemu. Instead, I really need to spend what little hobby time I have learning how to write an X window manager, using a cross-compile environment with the CDT, of course ;)

Monday, December 22, 2008

I could have had a V8, oh wait, I do

I've always been intrigued by programming languages and what makes them tick, and what is the best one for what situation. That's why Dave Thomas's keynote at ESE still has me thinking about the mix of JavaScript and C++. So much so that I spent a few hours this weekend while waiting out the snow storm to get Google's V8 JavaScript VM building under MinGW for Wascana. I think it would be an intriguing addition to have the VM DLL available for developers using Wascana. With a few changes, I have it building and passing the unit tests and I have a patch into the V8 project. I'll make V8 available in the Wascana 1.0 alpha in the next couple of days.

Now that I have it, I have to ask myself: what the heck do you do with it? I've thought about building wrappers for the wxWidgets library to let you build thick client apps in JavaScript. wxWidgets also comes with Wascana, and thick client apps are kinda what Wascana is all about (aside from dreams of using it for game development, which could also benefit from a fast JavaScript engine).

But it's not clear where one would draw the line between JavaScript and C++. Given a C++ library like wxWidgets, or SDL, or what have you, is it enough to wrap it with JavaScript and have the developer do everything in JavaScript? Or should JavaScript just be this thing on the side that allows for extensibility of some larger application written in C++?

It makes me wonder if I'm following some crazy idea that some madman sold me in a bar in Germany. Or maybe this is challenging me to give it deeper thought, to think about how scripting and native languages are supposed to mix. Where in all this is the sweet spot of architectural balance? Or is there one? Either way, it'll be on my mind over the Christmas holiday season.

Friday, December 19, 2008

Fun with FEEDJIT

I'm not sure if you noticed, or are reading this blog from one of the syndication sites it gets copied to (like Planet Eclipse, or the Wind River Blog Network). But if you check back to the original site and scroll down a bit, you'll see a new panel called the FEEDJIT Live Traffic Feed. I know people express concerns about web things following them, and if I get enough negative response to it I'll pull it off. But in the meantime, I'm spellbound by this feature.

I'm learning quite a lot about the audience for this blog. The traffic feed gives me the city where the person was, which is spread throughout the world, as well as a hint at how they got to my site. A few people come directly, I guess from an RSS reader where they've subscribed one way or another (thank you!). More often, though, people end up here based on Google searches, and I get the snippet that they were searching for! Creepy, but very useful.

So what are people searching for that pulls up my site? Well, a lot of it has been the topics I'm most interested in lately, and that's CDT for Windows development, including Windows cross to Linux. It's good to see the interest from the community on that, and I am continuing to work on Wascana 1.0 as I write this (SDL is building in the background). I also often get a few queries on the Subversion Eclipse plug-in wars (I hate both right now, go git!). And I get the odd one looking for help, like today's "eclipse CDT autocomplete crap" (yeah, it has issues if your environment isn't set up).

Anyway, it's pretty interesting to watch, and it humbles me immensely to see people from around the world reading what I write, especially when the Google search reveals they searched for me by name. But I love to write and share my thoughts, and I really appreciate it when people leave comments. Whether I agree with them or not, I always learn something from what they put there. It's a lot of fun and I encourage everyone to do the same. There will always be someone out there interested in what you have to say.

Tuesday, December 16, 2008

Fun with my little VIA console

At the Embedded Systems Conference in San Jose this year they handed out little VIA embedded EPIA systems to the attendees. I'm not sure everyone got one, but I was thrilled. It has an embedded VIA processor with a chipset that includes Unichrome 3D graphics, and also includes a hard drive, ethernet, VGA, four USB ports, and audio in and out. It's a cool little unit.



I haven't done too much with it, but thinking about this Open Console concept (set top box with 3D graphics running Linux), I thought I'd try setting it up with some of the things I had in mind. I started by putting the Debian lenny installer onto a USB stick and installing from it. That was a little tricky until I reformatted my USB stick and put syslinux on it properly. I installed enough packages to get X running with the openchrome driver for 3D graphics. glxgears ran pretty smoothly, which gave me some hope I could actually use this thing to run games.

So I got adventurous and installed Nexuiz, an open source first-person shooter. To my surprise, this and other open source 3D games are available from the Debian package repository. A quick little 'apt-get' brought down around 450MB of game, and I was off and running. Well, off anyway. I got about 20 seconds per frame, which makes it a little hard to even notice the thing was running.

Anyway, I tried a few other simpler games and they actually worked. I had to force myself to go to bed while hooked on billard-gl. It was fun. But I've slowly begun to realize that games built for the desktop aren't really ready to be played with only a joystick, which is likely all you'd have in a set top box scenario. So there would be work to be done.

I also started to understand first hand the commercial opportunity behind Linux, embedded Linux especially. Sure, you can install a Linux distro and get a desktop environment up without too much effort. But try to do anything off that beaten path and you're in for a lot of work. If you can share in that work, fine. If you can pay someone to do it for you for cheaper than you could do it yourself, even better.

I also gave up on using this little VIA box for my play-totyping (hmm, new word). I need to start getting ready for my EclipseCon tutorial, which will help me get back into the guts of qemu. Maybe I can do a little work there to bring GLX emulation to it, play time permitting, of course. Or maybe I'll shell out the $500 to build a real system. Though playing in qemu would be funner...

Saturday, December 13, 2008

Time for Distributed Source Control is Now

Imagine this scenario. You're part of a small team that's been following the CDT closely and has adopted it as the IDE for your commercial platform. You grab the CDT source at times convenient to your product delivery schedule and work on a local copy, fixing bugs you find as you go through product testing. You're not a committer, but you do submit patches from time to time and hope that the CDT team picks them up. But they're often busy with their own delivery schedules, and the patches often grow stale and fall off everyone's radar.

So you live with your CDT fork and struggle every time you have to update to a new CDT version, so you don't do that very often. And since you're busy struggling in that environment, you really don't end up with time to get more involved with the CDT. You are a small team and you only have so much time in the day. You run into Doug once in a while at the Eclipse conferences, talk about what you do, and promise you'll figure out some way to get more involved, but he knows your story too well and doesn't put much faith in it, despite his appreciation for your intentions.

Sounds like I have experience with this, doesn't it? This scenario is all too real and I'd bet is very common across all open source projects. Relying on CVS and Subversion at Eclipse, with access controls limited to the select few committers, makes it very difficult for those on the fringes to get more involved. It truly is a have/have-not environment. The committers have it easy, checking in their changes whenever they want, while those that aren't are struggling to keep up, or simply fork and go their own direction.

I've learned that the new Symbian Foundation has selected Mercurial as their source control system. Along with Linus's git, it's one of the new breed of distributed source control systems. These systems allow for multiple repositories and provide mechanisms to pull and push changes between them. The introduction chapter of the Mercurial on-line book provides a great description of why this architecture works well for large, globally distributed projects.

I invite everyone to read it, especially the Eclipse community. Because I think we need this kind of capability now. CDT needs an infusion of new blood and I know there are a lot of people who work with the CDT code base but have only a limited time to contribute back. If we had the infrastructure to better support them and make it easier to pull their changes into the CDT main line, and easier for them to keep up with everyone else's changes, it could be the formula we need to grow.

Thursday, December 11, 2008

x86, the ultimate applet engine?

I need to watch out or people will start calling me a Google fan boy or something (well, too late). It seems everything they come up with lately grabs my attention. And I guess it makes sense, because they seem to be heading in a different direction than a lot of people, and more in a direction that appeals to me. First Android (open mobile handset), then Google Chrome (Webkit-based browser), then the V8 C++ friendly JavaScript VM, and now, Native Client.

If you haven't heard of it, it appears to be a Google research project into running secured native x86 code in a browser. Yes, we have tried that before with ActiveX and it was a security disaster. But the underlying need for high performance interactive web pages is pretty intriguing. If you could write browser applets in C++, why wouldn't you? I suppose...

I had to try it myself. The install instructions are for Firefox, but I dumped Firefox for Chrome a while ago. It's good that Chrome has some Firefox in it, because all I had to do was copy the plugins for Firefox into my Chrome Plugins directory (it's hidden in Local Settings, Application Data, Google, Chrome, Application, Plugins).

I was then able to go through their little demos and tests. They're cute, and the Mandelbrot demo shows some of the power. There's also a demo of the open source SDL version of id's Quake. It's pretty complicated to build and I couldn't get it working on my Windows box (mainly because I'm Cygwin-free and it seems to need it). But it's an interesting idea, taking an SDL-based application and converting it to run in a browser (Native Client uses SDL to do audio and video). Maybe they'll even expose OpenGL through SDL to the native code as well. That would be more interesting.

One thing, though, that burst my bubble with this whole experience was the results of the performance tests that they have. The C++ versions of the tests were only marginally better than the JavaScript ones. I think that's thanks to the great job they've done with the V8 VM. If that's the case, I really wonder whether this stuff actually makes sense, other than porting old software-rendered games to your browser, I guess. I need to stew on that one a little before buying into this idea.

Tuesday, December 09, 2008

A busy day for Khronos

My Khronos.org News feed filled up all of a sudden today. Looks like they've been busy and had a couple of announcements to make.

They released a new version of the 2D OpenVG spec. They added some APIs for text glyphing to make it easier to draw good looking text. I'm not sure anyone really uses OpenVG, especially when you are most likely to be drawing 2D in a web browser with Adobe Flash or SVG (and even then, most likely Flash). From the news release, this is probably most interesting to the mobile crowd.

The more interesting announcement for me was the release of the first OpenCL spec. OpenCL is a standard for running general algorithms on the newer GPUs in video cards. It'll also be ported to other multi-core systems like Cell and DSPs, but most likely you'll be using it with a video card. Of course AMD and nVidia were quick to announce their support for this spec, which gives it some immediate momentum.

OpenCL specifies a C-based language for parallel processing as well as the APIs that drive it. Up until now, nVidia and AMD had proprietary solutions that didn't work cross platform. OpenCL opens the door to making parallel programming available to more and more programmers, and I'm dying to see what they'll do with it...

Sunday, December 07, 2008

Wascana 1.0 in Alpha Testing

Well, that didn't take very long. I've spent a few hours building my special p2 artifact repository that manages installed files, including extracting them from an archive and deleting them at uninstall time, along with its associated p2 touchpoint that hooks it all up. It's not a lot of code and you can see it in CDT's CVS space (repo: /cvsroot/tools, module: org.eclipse.cdt/p2).

I've also created a generator that creates p2 repositories that use that touchpoint to install remote artifacts from various locations, mostly on SourceForge. Currently I only have support for the MinGW toolchain and the MSYS shell environment. I'll add libraries as I build them with the 4.2.3 compiler I'm using here. I'll start with SDL and also do wxWidgets and boost. We can always add more later.

It's working very well. Managed build picks up the MinGW toolchain and uses it when you select it. MSYS doesn't work yet for Makefile projects, but managed build is usable now. And here's how:

  1. Unzip the Eclipse IDE for C/C++ Developers anywhere you'd like on your machine. You can also start with any other Eclipse install as long as you have the CDT installed.

  2. In Software Updates, expand out the tools/cdt/releases/ganymede site into CDT Optional Features and install the Eclipse CDT p2 Toolchain Installer feature. Allow Eclipse to restart to make sure things are initialized (I'm not sure if you really have to do this, I'm just paranoid).

  3. Go back to Software Updates and add the Wascana repo site at http://wascana.sourceforge.net/repo. Install everything under the MinGW Toolchain category. This time you don't need to restart. You don't even need to apply changes.


Once you're done, you can go to the directory containing eclipse.exe and you'll see the mingw and msys directories there, ready to go. Well, at least the mingw dir is; I still need to set up msys correctly to find the mingw compilers, but it is only an alpha :).

Feel free to give it a try and let me know what you think. I'm pretty excited with how this is going. While creating this, a new version of the win32 API component came out and I added it to the repo and the Update... feature found and installed it. Very cool!

It's a very interesting path this is heading down. The ability to incrementally add in libraries and update to new versions of the components will be a great showcase of how p2 can manage more than just bundles. Not to mention help me build one heck of a Windows development environment based on the CDT and open source tools and libraries.

Friday, December 05, 2008

Linux Kernel Debugging with CDT

Just ran into this awesome tutorial on how to use the CDT for debugging the Linux kernel using qemu's gdb remote debug service, which makes it work much like a standard hardware/JTAG debugger.

This was something I played with a while ago when I looked at adding hardware debugging support to the CDT as an optional service. And I believe Elena from QNX has continued on with that work and we should hopefully see it completed for Galileo (if not before that).

But it further solidifies for me how important qemu is as a tool in the belt of the embedded software developer. We've seen it as a key enabler for Android, without which I'm not sure it would have achieved the momentum it has. I think there are still issues with it, and of course one I'm looking at is the ease of adding new hardware emulation and 3D graphics support. But I think there is plenty of opportunity there, and being an open source project, the door is open to help make that happen.

Monday, December 01, 2008

The Future of Wascana

For those that don't know, I've been working on the side on a complete open source IDE distribution for Windows called Wascana Desktop Developer. It includes the CDT, the MinGW tool chain, and a handful of libraries that enable cross platform development. I did the original "beta" release over a year ago and it has over 12,000 downloads to date. But it's getting long in the tooth and I really need to respin it with Ganymede Eclipse/CDT and gcc 4.x.

The question I'm dealing with now is what Wascana should look like going forward. My Wind River team and I are just wrapping up a p2-based installer for our Wind River products; it's similar in spirit to Wascana but on a much bigger scale and targets our Wind River platforms. We've learned a lot about how to extend p2 to manage the install, update, and removal of archived binary files into an install tree.

I want to bring that same experience to Wascana and have started working on an open source version of these extensions. I'm doing it as part of the CDT since I need to support CDT 5.0.x with it and want to release around Christmas time. Once I check it in, the p2 team can take a look and see if they want something like this, and give feedback on changes that would be needed to get it into an upcoming platform release.

In the end, Wascana will mainly be a p2 repository that ensures you have all the plug-ins installed to get a working CDT for MinGW, and that will allow you to download and install the MinGW tool chain and libraries, either from their home locations, or from the Wascana SourceForge download area if I need to rebuild for whatever reason. Updates and new components would be done by adding them to the repository.

So the question becomes, do I need an old time installer for this, or would the community be happy simply downloading the Eclipse C/C++ IDE package and working with the Software Updates tool to get everything they need. I have a feeling people will still be looking for that single setup.exe download to set everything up. Then I need to ask whether laying down the bits is sufficient, or whether I need to do a p2 director thing.

The good news is that I sense MinGW is maturing. Despite having an unmanaged release cycle (and I do have a second source for the MinGW gcc tool chain, thank goodness), it looks like it's ready for prime time, at least for my little distro. That said, I'm giving up on Windows debug support. My focus is cross-platform development, my time is limited, and building a pure Windows debugger is hard; without a significant contribution it won't happen, so I'm not counting on it. Wascana will do just fine without it.

Saturday, November 29, 2008

Javascript and C++, eh?

I can't get my mind off of Dave Thomas's keynote at Eclipse Summit Europe. His words made so many things crystallize in my mind. As I've stated many times before, in this blog and in my day job, I hate Java. It's an incredible irony that I do my day-to-day coding in Java to support developers who focus so much on efficiency and performance and who use C, with a sprinkling of C++ for good measure, mainly to accomplish that. And then to hear their constant complaints that Eclipse is too slow. My good friends in Java VM land tell me not to blame Java for that, but you know, it's so tempting.

Dave mentioned that applications should be written in C++ and JavaScript. I dunno. C++ has its difficulties, there is no doubt. It's hard to write good C++ programs. That's why the mix with JavaScript made me think. Does it make sense to build an application where your hard-core performance-focused code and code that interfaces with the underlying system is written in C++, but all the code that manages interactions with the user is done in JavaScript?

I've started to take a look at Google's V8 JavaScript engine. As they say in their videos, it's built for embedding in C++ applications, and they have implemented some interesting tricks to get JavaScript to run fast, such as a JIT compiler and some heuristics to make object property access faster. As well, they have an efficient memory management system, which includes being able to persist snapshots of the heap, including the JITed code, out to the file system for faster startup.

That got me thinking of Eclipse, of course, or really IDEs in general. What if you took a cross platform GUI toolkit like wxWidgets, added in a component model to allow for dynamic extensions, rewrote the CDT parsers in C++ for speed, plus ..., and threw in a JavaScript engine like V8 to make it easy for users to program? Wouldn't that make for an interesting architecture? But we already have Eclipse, so why would we do all that again? Just a question...

Friday, November 28, 2008

An Interesting Ottawa Demo Camp

The Ottawa Eclipse Demo camp was tonight and I thought I'd write about it before I went to bed. The demos were quite interesting, a different mix than before which keeps it fresh. And the hospitality of the Foundation staff was awesome again.

I was especially intrigued by Nick Edgar's embedded web UI demo that he's working on as part of Jazz. This is something I thought of doing for my talk at ESE: present information in a web page using Eclipse's embedded browser, and then have JavaScript on that page interact with the surrounding Eclipse environment. The workflow he showed was very clean and I think there are some pretty cool things we can do with this. The technique he used was quite a kludge, and even he admits it (communicating through the status bar?). But the SWT guys are thinking of better ways, and I can't wait to try this myself.

The other interesting demo was from the Zeligsoft gang. I worked with some of the fellows who started Zeligsoft; we were part of the Rose RealTime development team. It was interesting to see the product they've come up with and the similarities it has with the stuff we did back then. They're betting the farm on model-driven development. I can't say whether they'll succeed or not; they've done a few things better, but a lot is the same.

I also have to thank Boris and Eric for their demos on e4 and the model-based UI in particular. I have a better sense of what they are trying to accomplish. Whether it's better or not than what we have today, I'm not sold yet. But I'll have to give it some hands-on time before making a final judgement.

I also got some interesting feedback on my article on IBM and Eclipse. (BTW, it's not whether we can survive, it's that we better plan and make sure we can, which I think we're finally doing). There were a lot of IBMers at the Demo Camp, which was good to see. And there were as many ex-IBMers there too. I think it's pretty healthy. Eclipse expertise is spreading throughout our small and tight-knit town, and Ottawa now has a great concentration of it, which makes this a great place to be.

Thursday, November 27, 2008

Long Live the Benevolent Dictator

The last few weeks I've been nose to the grindstone finishing up our first Wind River product release with a new p2-based installer. It's been a while since I've been involved in commercial development and, though it's been grueling and has taken me away from my CDT project lead duties, I can see the light at the end of the tunnel. It looks like we'll be able to ship on time and with good quality, though maybe without all the bells and whistles I had hoped for when we started.

It's good to work in the corporate structure again too. If there are any decisions to be made, we have the processes and organization in place to make sure those decisions get made and that all the loose ends get tied up. It's the only way to succeed. You need that structure to make sure everyone is going in the same direction and has the same objectives.

So that got me thinking. Looking at my involvement with the CDT, I have had feedback that people looked to me as the guy to make the decisions, or at least to adjudicate any conflicts. To be the benevolent dictator at times. And we ended up getting a lot of things done over the years and everyone working on the CDT was going in the same direction. We sort of made up a structure where one didn't really exist, because we needed it to be successful.

I have big fears for e4 on that front. McQ and the IBM gang have made it clear over and over again, including on today's e4 call, that they are working on what they find important, and everyone else should do the same, or nothing will get done. And there are a few things going on. I'm leading the resources effort and we're working on things that are individually important to us and our employers. And clearly the SWT team is doing the same. But as hard as I try, I can't figure out what the UI guys are trying to accomplish. And then there are lots of things in the Eclipse Platform that no one is looking at. Debug, for example.

I firmly believe that even with open source projects, you need that benevolent dictator to actually deliver things. Where would Linux be without Linus? Where would Eclipse be without the early dictatorship of IBM? And there are countless examples. Where you see a successful open source project, you find an individual, or a small team, who makes decisions and ensures everyone is working together. I get the sense that people think that's anti-open, but I can't see how a project, open or not, can succeed without one.

Can Eclipse Survive Without IBM?

I bet you this title got your attention...

Let me tell you a story. It's one a lot of us Eclipse "insiders" know from our trip to Ludwigsburg, Germany. If you look at the program for Eclipse Summit Europe, you'll notice a distinct lack of Eclipse Platform committers, i.e. IBMers, presenting. And one of them was lucky enough to get his travel approval only the Friday afternoon before the conference. The Summit was a resounding success despite that. The Eclipse community in Europe has gone well past caring about the traditional Platform and is looking at really cool technologies like OSGi with Equinox and Modeling (despite Dave Thomas' decree that modeling sucks, which it does, at least UML-like modeling).

Now, I'm not sure if this year is any different from previous ESEs. But with the discussions we're having on the EclipseCon program committee about how many IBMers will be able to attend to give their presentations, it's got me thinking. What happens if this apparent trend continues and we lose the commitment IBM has made to Eclipse? Can Eclipse survive without IBM?

Well, I can say Wind River is doing its part to help out. Martin O and I are working on the Platform Resources evolution for e4. And we have Pawel, who's now a Platform Debug committer. And as always, we're making major contributions to the CDT and DSDP projects. And the numbers show the Eclipse committer community continues to grow and a lot of projects are healthy.

So can we survive without IBM? Absolutely. In fact, I'd consider the Eclipse Platform feature complete, at least for the needs of IDE and RCP/OSGi vendors. Yeah, things could be cleaned up, and yeah, we could make Eclipse work with Web 2.0 (although I really question whether SWT is the right technology for that). But from what I saw in Germany, Eclipse is alive and well. There are some really cool things going on, and while the platforms are stabilizing and probably becoming less interesting (and I'll sadly include the CDT in that list), I get the sense that those relying on the platforms will keep them alive. They have to.

Friday, November 21, 2008

Code Analysis and Refactoring with the CDT

For those that missed my talk at Eclipse Summit Europe, here are my slides. Unfortunately, that's pretty much all the documentation we have on this capability, as I mention in the next steps slide. The community needs to step up and help with this if we want this capability to grow.

Thursday, November 20, 2008

CDT at Eclipse Summit Europe

Well, the closing session is about to start and the vendors are packing up their displays. Another successful Eclipse Summit Europe is about to go off into the sunset. For me, it was proof again why I love coming to this show. The CDT community in Europe is strong and a lot of them are doing and want to do interesting things with the CDT.

The talk I gave was on the code analysis capabilities of the CDT, introducing the things you can do with the CDT's parsers and indexing framework. I also introduced the new refactoring engine, which really opens up a lot of cool automations for analyzing and refactoring your code. The best part is that I had a few guys come up to me afterwards to ask about certain analysis things they wanted to do. I'm glad I gave that talk and I hope more people take a look at what the CDT has to offer in this area.

I also had a number of people ask about the CDT managed build system. This is an area in a bit of trouble right now with the CDT. One of the key developers has left and we're struggling to understand the code he left behind. Hopefully the vendors who have concerns about the build system will join us and get us rolling again. The CDT build model can do some pretty cool things and I look forward to seeing the different build integrations people are thinking of working on.

I had a discussion with someone interested in working on the Windows debug integration I have on my wish list. I've given it a couple of tries and there is a start of one in the Target Communication Framework (TCF) agent. Hopefully we can finally get this together and have full support for the Visual C++ compiler with the CDT.

Speaking of TCF, there was a lot of interest in it from various embedded system vendors. It's a really good technology for building target agents with a clean communication protocol back to Eclipse and a services oriented architecture. I've been interested in component models for C/C++ applications and I can see how this agent could use something like that. I'll have to give it some thought and see if others are interested in getting involved in that.

It's been a fun and interesting week. Hopefully I talked to everyone who wanted to talk CDT with me. And hopefully we can build some momentum off of that to continue the growth of the CDT community. Those late nights in the hotel bar with the Eclipse gang were part of that community building. I'm going to sleep well on the flight home, but it was worth it.

Thoughts on Dave Thomas' Keynote

Ed Merks already gave a summary of Dave Thomas' keynote yesterday morning here at Eclipse Summit Europe. It was the first time I saw Dave speak and I was warned he tended to say things that offended the audience. And to Dave's point, that is kind of what a keynote speaker should do. Spark thought. Break through the assumptions that we tend to fall into when we get comfortable in our skin. And I think he raised some serious points that are making me wonder about what's really happening in our industry.

I guess his main point is that Java for embedded has missed the boat. If you haven't gone through the pain of doing Java for embedded devices, don't worry, you didn't miss anything. I've been waiting to see when I need to care about Java in this space and I've talked to some of the people here at Eclipse Summit Europe about this. I think they quietly agree with Dave. Those that have figured out how to do Java on embedded are doing OK with it. But there are a lot of issues to face. The worst of them is the bloat that the Java VM continues to accumulate from release to release. The embedded VMs are horribly crippled, and if you want to use the Sun VM, you're prevented from paring down that bloat. The discussion is interesting, and we may still be proven wrong, but for now, I can ignore Java for embedded and I can sleep at night.

There were some other messages from Dave that hit home as well. Programming is horribly complicated. Normal people will never be able to figure it out. Which means if you have figured it out, you're not normal, and I guess that includes me. But it is true. I've blogged a lot about this in the past. We can barely get our programs to work as it is. Wait until you're trying to program 100 threads running through your mess all at the same time. We're doomed.

But there are some things we can do to give us a chance to survive. Dave talked a bit about how the lack of a software component model makes us look like fools in the eyes of the engineering community. Can you imagine if automakers had to custom build all the components that make up a car? Imagine now if we could go to a shop, pick up high quality software components, and tie them together with a few lines of script.

Now, Dave was being extreme in his position. There are a number of areas where component models are being used: OSGi is an obvious one, and all these "mash-ups" are doing things like this. But coming back to embedded, we can't rely on Java to provide the solution. Dave's answer was C++ with JavaScript. And I think that's a great idea. Build components in C++ and tie them together with a scripting engine. Dave picked JavaScript, which is OK, and he did mention he's working with Google on their V8 JavaScript engine. Lua is another good choice. And actually Domain Specific Languages offer solutions as well (and I'm not just saying that because I'm sitting in Rich Gronback's DSL talk right now ;).

It was really interesting to spend time with Dave Thomas, in his keynote and with a group of us at the hotel bar. I could learn a lot from him. This week the lesson was to open up my mind and challenge assumptions. If you read this blog regularly you'll find I tend to do that anyway, but it's an important reminder to keep doing it and make sure we don't make the same mistakes over and over again.

Friday, November 14, 2008

You want to see a busy mailing list?

Just check here: http://lists.gnu.org/archive/html/qemu-devel/2008-11/threads.html

I was looking to see when qemu, a very cool virtual machine for many hosts and many target CPU architectures, was going to come out with a new release. As part of that, I was checking to see if it was under active development. Well, with 55 e-mails on November 13th when I looked, I guess it is :).

I did find a conversation back in October about 0.9.2, which will likely include some new technology called TCG that will eliminate qemu's dependency on the gcc 3.x compiler. That's good news since I want to release Wascana 1.0 with the gcc 4.x compiler and use Wascana 1.0 as the base IDE for my EclipseCon tutorial on working with cross-development environments. I hope it all comes together by March.

Speaking of which, only 11 more days until submissions close for EclipseCon. Get them in early and get them in often. And soon!

Wednesday, November 12, 2008

Now that's small.

Just ran into this article on LinuxDevices.com about a tiny computer: an Ethernet jack with a whole system inside. The jack has an ARM9 processor with 8 MB RAM, 4 MB flash, and some interfaces you can hook electronic devices up to. The idea is to add network connectivity to devices that normally don't have it, like air conditioners and stuff. Apparently there's even a WiFi version of the thing.

I found a couple of neat things about this device. First it gives you network access to pretty much anything allowing for centralized controllers to manage those things. This is probably old news to guys who work on building maintenance automation systems and stuff like that. But this device somehow made it all real for me.

The other thing to note is the memory size here. 4 MB of flash for the file system isn't very big. And neither is the 8 MB of RAM you run with. If anyone ever asks if C is still important, I'll just point to this thing.

And finally, I had to have a chuckle when I saw this: "the kit includes an IDE based on Eclipse 3.1.2 and CDT 3.0.2. It supports C/C++ development, CVS code management, and visual debugging via Ethernet." Yet another vendor using the CDT to build cool things :).

Tuesday, November 11, 2008

Design like you'll be there in 10 years

I probably blogged about this a long time ago. I remember watching the news conference for the landing of the Mars rover Spirit. I had watched the landing live over the web and remember the jubilation of the team members as they received the first signal confirming the safe landing. At the news conference one of the project managers mentioned he had been working on the project for 10 years (through one previous cancellation, that is, but still pretty darn good). He was beaming to see the success. And it was well deserved.

That idea entered into my book of software design philosophy: design like you'll be working on the project for 10 years. Think of the responsibility that would mean. In 10 years, you'll be paying for the short cuts and short sightedness. So don't.

Well, preparing for my talk on static analysis and refactoring for Eclipse Summit Europe next week (yeah, a bit of a late start, but it'll be great), I finally have my own version of this story to tell.

Six years ago, my good colleague and friend John and I started down the road of building a C++ parser for the CDT. My mentor at the time thought it was a crazy idea, but we had a feeling we could do it, so we plowed ahead and actually got it to work. The parser allowed us to build a more accurate way of populating the Outline View (via the CModel). It then led the way to indexing, which allowed C/C++ Search and Open Declaration to work well. It was tough and we fought performance battles for most of it, but we soldiered on.

Somewhere along the way we started dreaming of C/C++ refactoring a la JDT. Everyone thought that was a crazy idea (despite secretly wanting it too). With all the madness of the C preprocessor mucking with the source code before it gets to the parser, how could you properly create the TextEdits that the LTK (which the JDT guys generously pushed into a common plug-in, BTW) needs to do the refactoring?

Well John put in a lot of effort and forethought and created a way to map AST nodes to location objects which allowed you to unravel where all the text came from to create the node. It wasn't perfect, but it was a start. And unfortunately due to the untimely end of our funding, we never got to finish it.

Well, I finally got a deep look at the work that Emanuel and his team at the HSR Hochschule für Technik Rapperswil have done on the CDT refactoring engine and its early refactorings. Following it through the debugger, I hit it: IASTNodeLocation, the work John had started years ago but never got to see in action for its intended purpose. It's been fixed up by Emanuel and CDT Indexer Master Markus, but it was doing what we had dreamed about many years ago. Weird, but it actually brought a tear to my eye.

But it really does prove the point. Design as if you'll be working on a project for 10 years. Even if you end up not being there, someone will be, and your work will live on, and it will be much appreciated.

Friday, November 07, 2008

Cross Compiling Fun for EclipseCon

It's been a busy couple of weeks for me as we get our commercial Eclipse p2-based installer into product testing. It's looking good but there's always those last minute fires (i.e. bugs) to fight.

In the background I've been trying to set up an environment that will allow me to use the CDT to build Linux applications from my Windows box, and then run and debug those applications on a customized version of the Qemu emulator that is also built using the CDT. Once I get this environment together, I plan on presenting how to do it at EclipseCon as either a tutorial or long talk. It's a great demonstration on how well the CDT works for multi-platform development.

My first step was to put together a cross compiler. The gcc compiler suite is great at it, but it's not obvious how to do it on Windows. Most GNU packages are hard to build on Windows, even with the MSYS environment from the MinGW gang, or Cygwin.

I first tried with MSYS. I copied over the C library headers and libraries and then tried to build binutils to get the assembler and linker, and gcc itself for C and C++. I was generally following the instructions here. I got really close, but unfortunately I ended up with a linker error when creating the gcc compiler support library (libgcc). Grrr.

Thinking about my reference article a little more, I remembered that even the MinGW developers build MinGW on Linux. I then discovered that the Linux distribution I am using (Debian lenny) already has the MinGW cross-compiler and libraries as a package. So I installed that and I suddenly had the ability to build Windows executables on Linux. So given that, I built binutils and gcc on Linux so that it would run on Windows to build executables for Linux. Wow. That's quite a few levels of indirection. But it worked!

Now all I need to do is build a CDT integration that puts the i686-linux-gnu- prefix on gcc and puts the location of the tools in the PATH and I'm ready to build Linux apps from my Windows laptop.

I'm looking forward to showing this off at EclipseCon. It's talks like this that show practical uses of the CDT and extensions people can build for it that we really need to highlight to the community at EclipseCon. Mine is only one, we need a few more. So if you have an idea, feel free to go to the EclipseCon site and submit a proposal.

Sunday, October 26, 2008

Why a good platform can't be free

I sure am having fun thinking about OpenConsole, i.e., a Linux based set top box that plays in the same space as Microsoft, Sony, and Nintendo, but is really an evolution of the Home Theater PC (HTPC) into gaming, all using open licensing so you don't have to pay the big boys to write applications for the platform. The underlying technologies are pretty cool as I play with adding OpenGL graphics to the qemu emulator. But the business side of it is interesting as well.

In particular, my thoughts turned to multimedia support on open platforms. This is where the insistence on Linux being free is really biting the hand that feeds you. Not all good software can be free. We live in a world of patents, and a lot of the key technology that goes into a multimedia system is protected by patents and requires a license to legally distribute implementations of that technology.

You know, I have no problem with that. As I've stated in the past, complex algorithms are hard to get right, and multimedia is complex to get good quality results from. And I don't blame the creators of this work for wanting to get something out of it. If they didn't, they probably wouldn't have created it to begin with, and we'd be waiting for some kind soul to donate it for free. Wishful thinking, I'd think.

But you know, the costs aren't that bad. One I was looking at was the DVD format licensing. There is a company in Japan that controls this and their pricing information is here. It's about $5K for the book (under NDA), $15K for the license, then another $10K or so for verification. That's not too bad if you're selling thousands of units. But it's also not zero. And the NDA also prevents the implementation from being open source to begin with anyway.

And there are similar fees for the very popular MP3 (minimum $15K). Blu-ray is similar. And some of these are yearly fees. So as you can see, if you want to produce a multimedia platform you can redistribute, the costs are non-zero. So why do people expect these platforms to cost zero...

Friday, October 24, 2008

BMW wants to go open

Ian Skerrett, our fine director of Marketing at the Eclipse Foundation, pointed out this article from MotorAuthority.com. BMW apparently is feeling out the market to see if there is an appetite by tier one manufacturers to work together on an open source stack for in-car infotainment systems.

The concept BMW has in mind reminds me a lot of Google's Android, which just recently released all the source to the Android platform for cell phones. Android is Google's attempt to open up the software stack for much the same reason BMW wants it for automotive: to ensure leading edge software applications can be built for those platforms with minimal obstacles. We'll see how well the master plan works, but I like the concept.

That would be quite a twist from the current proprietary mindset that these guys have today, and I'm not sure they are ready for the co-opetition this would take. Of course, we're pretty used to it at Eclipse where platform vendors fighting in this space work together on open source tools. That's fine, since that isn't our core competency and we're building a much better IDE together than we could independently. But that's where we draw the line.

Ian concluded his blog entry by inviting BMW to the Automotive Symposium at Eclipse Summit Europe (I am looking forward to ESE as well!). But this brings up a sore point that we often talk about but that seems impossible to solve. If they want the software stack to be completely open like Android, then they aren't doing it at Eclipse. The Eclipse Board forbids GPL code within its walls, yet I would think such a stack really could only be done on Linux, and that makes it a non-starter. You could look at Symbian, which will be EPL in the next few years, but I'm not sure Symbian is the right choice for this, especially if they want to link up with Android.

And this bugs me to no end. We are seeing some serious investment happening in open source platforms, the whole platform. The culture of commercial co-operation on open platforms at Eclipse makes it a natural to host such endeavors, which in turn would raise its profile immensely in the embedded and mobile community. Too bad the Eclipse Board shoots itself in the foot on this.

Monday, October 20, 2008

Fun with RSE

I love my home office setup. I still have an @work office that I go to, but with an autistic son who's home schooled, I never know when I'll need to work at home for a day or two, so it's good to have something set up that lets me keep working when I do. In the office, I have a TV which I'm using to watch the great baseball playoffs happening right now, and I'll watch hockey whenever I get the chance too. And while doing that, I get to play on my laptop, like writing here in this blog.

At any rate, tonight I thought I'd try hooking together the virtual machines running in my Windows environment. One is qemu running my simulated OpenConsole thing, to which I'll be adding OpenGL support. The other is VirtualBox running a desktop install of the same distro, i.e. Debian, where I'll be building the device driver and app prototypes. VirtualBox has nicer desktop control than plain qemu.

The question comes: how do I get the stuff I'm building on the dev machine over to the target? I thought of NFS, which is probably the best choice, but I'd need to spend time figuring out how to set it up for this. Instead, I thought I'd try an Eclipse solution, the Remote System Explorer (RSE), and hook up everything using SSH.

First, I had to redirect a port on my laptop towards the qemu SSH port 22. The qemu option '-redir tcp:2222::22' did nicely there, and I was able to use it to log into my qemu using PuTTY on my laptop. I also decided to forward another port, 2345, to the same port on qemu to allow gdb on my dev machine to talk to a gdbserver on the target using that port.

I then set up the SSH connection in RSE on the dev machine. I used the 'router' address so that the dev machine would connect to Windows on my laptop, which then forwarded the SSH connection to qemu. It was tricky to figure out how to set the port number to 2222 instead of 22, but I found it and it worked like a charm. I used the Terminal view to log into the qemu session from VirtualBox. Cool!

I then tried the C/C++ Remote Launch feature that uses the RSE connection to download and launch into the CDT debugger. When I first tried, the executable on the target didn't have the execute permission set, but once I fixed that, the debugger launched fine. Very cool.

Apart from being fun and interesting, this OpenConsole thing is giving me some real experience on using Eclipse tools to do embedded development with Linux and exercise all that it offers. I am very pleased with it and I think we really need to get the word out how well it does, like a Webinar or something :)

BTW, Go Rays!

Friday, October 17, 2008

Open Console you say

Linux powers "cloud" gaming console.

More info here.

I hate the term cloud, but this is close to the internet appliance/open gaming console I have been thinking about. Specs are damn close too. Although I'm not sure the ATI HD 3200 class graphics (I assume it's the 780G chipset) will do a good job at the games. But it's good to see someone with money came up with a similar idea and has made this concept a reality, or at least is marketing it.

Update

Looking closer at the EVO website, this thing isn't as open as I was thinking. Game developers have to sign an NDA to get the SDK. Odd. They do mention proprietary features that are only in their version of Linux. What I had more in mind was an open distro that ran on specific hardware specs, but was truly open. The games they have listed on their web site are all open as well and can run on any Linux distro that has OpenGL support and drivers. You don't need to be proprietary...

Windows as a host for Linux development?

Here's something I'm trying to decide as I work through the ultimate development environment for a Linux based "OpenConsole" (and to be clear, I'm talking about set top box class consoles, not mobile). As I mentioned in previous blog entries, I've figured out how to extend qemu to do OpenGL calls on the host and present a PCI interface to the guest to make those calls. All I need is a Linux driver and user space library to use that interface and present the OpenGL (or OpenGL ES) interface to applications that can do games, or what have you.

I figure Linux is a natural host development environment for the device driver. You need to reuse the kernel build system to build it and from what I understand, that build system doesn't work on Windows, not even with Cygwin. So that's a lock, and I can use the Linux version of CDT to build it.

But when it comes to applications, I am wondering how many developers would prefer to use Windows as their development host. From what I understand (again, and I keep guessing here), most game development is done on Windows, even when targeting the "closed" consoles. Actually, XBOX development is obviously done on Windows. But I believe the others have Windows hosted SDKs and tools as well.

However, as with device drivers, Linux should be an obvious choice for application development targeting Linux. This is especially true when targeting PC-type platforms since the host tool chain can actually be used to target the console, and even more true when you're actually using the same run-time lineup.

I get the feeling that there's more to life than writing your Linux targeted application. If, as the developer, you're still relying on a lot of Windows tools or you just plain prefer Windows as a work environment, you would probably want to write your application on Windows as well.

It's funny how we sometimes forget history and the fact that we abandoned our Unix environments for Windows because it had much better tools. And as I (and many others) have discussed, Linux hasn't caught up enough to make us want to go back. So I firmly believe that Windows is an expected host development environment for Linux development, especially embedded. And with the help of gcc's cross compilation ability and the gcc support in the CDT, it shouldn't be that hard to put together.

Monday, October 13, 2008

It's all about the Stack

Someone recently pointed me to a presentation that Tim Sweeney (Mr. Unreal engine) from Epic Games gave at POPL (Principles of Programming Languages) 2006. The focus of the presentation was on "The Next Mainstream Programming Language" where he discussed the challenges game developers have with performance and quality and what the next generation language needs to have to help with their problems. I truly believe game developers are at the forefront of software engineering and have the heaviest requirement set for IDEs. And that's why I'm trying to figure out how they work.

Tim's slides talk about the technologies that went into the game "Gears of War" and it's a very interesting mix. While the bulk of the code is C++, there is extensive use of scripting languages as well. And, of course, most modern games make extensive use of Shading languages to manipulate vertices and pixels using the almost teraflop class GPUs we have today. So they could really benefit from an IDE that did more than just C++ or more than just scripting while integrating shader development into the fold.

The other interesting point I got out of Tim's slides was the breadth of software libraries that they were using - DirectX for 3D graphics, OpenAL for sound, Ogg Vorbis for music, wxWidgets for 2D widgets, Zlib for compression, and the list goes on. Apparently they used a mix of 20 libraries to build Gears of War. And it only makes sense as the quality of the software components out there removes any need to build the same functionality yourself.

And I think this is another area where IDEs could improve: integration of SDKs and automatic setup of build and indexing environments. We do a bit of that in the CDT, at least on the indexing front. And it is something we've talked about on the build side, but we've never really come up with a generic mechanism that would let you add SDKs to a project.

Building an IDE that helps game developers be more productive would benefit all users of the IDE, as I think all developers run into these issues. Maybe not at the same scale, but I can see how everyone would benefit from multi-language and software component management support. And, of course, I can't see a better platform to build this on than Eclipse. If we look hard, we'll see that we have a lot of this already.

Saturday, October 11, 2008

On the Future of C++

There's been talk for a number of years now about the decline of C++ and the rise of virtual machines and scripting languages. But certainly from where I sit, the C/C++ community is still very strong. In fact, I still see many more C applications than C++, especially in the Linux and embedded worlds. Though everyone agrees that for large applications, C++ makes more sense than C.

But I have to admit, for desktop applications, I'm not sure C++ is the right answer like it was in the 1990's. We're certainly seeing Java, with the help of Eclipse, and C# on the .Net side, take a much bigger chunk of the pie chart. And I think that's the right approach. The richness of these environments naturally enables a developer to be much more productive than in the C++ world, especially when dealing with the user via graphical interfaces. I'm pretty much ready to concede this space to those languages. Sad, but true.

But there are a few areas where I don't think C or C++ will ever go away. And those are the areas where the developer has the need for speed and wants to work close to, and take advantage of, the native processing hardware underneath the application. I often hear from people who would know that modern Java VMs can actually beat C/C++ in performance, thanks to run-time optimizations. But projects like LLVM, which provides similar optimizations for native applications, may balance the scales there. And at any rate, out of the box, native applications start with the better performance.

When you're writing a high performance application, like 3D games or scientific simulations, or if you're working on mobile applications where you need to balance CPU cycles against battery life, C/C++ will always be the obvious choice. There may be exceptions to the rule, and Microsoft with .Net Compact and OSGi for Java are trying to make a splash, but C/C++ will be difficult to replace.

Thursday, October 09, 2008

OpenGL 3.0 or OpenGL ES 2.0?

First, I have to admit, I'm a newbie at this whole 3D programming world. I watch from afar with a lot of interest but no real experience working in that world. So I apologize ahead of time if this is a stupid question. But I know a lot of CDT users are using the CDT to work with 3D graphics APIs and game engines, so I thought I'd bounce this off you.

It was interesting to watch the response to the release of the latest major version of the OpenGL spec. The reception from the game development community was especially interesting. They were furious, at least according to Slashdot. But I can see disappointment in other articles I've read. The question came up: do we declare Microsoft the victor in the OpenGL versus DirectX wars? To which I add, does this spell the end of the dream of gaming on Linux?

From what I gather, there were a couple of issues with the OpenGL 3.0 release. One, the group writing the spec disappeared behind closed doors and sprang it on the world when they were done, without really getting the ordinary game developer's input. And in the end, it appears a lot of compromises were made to keep the non-game developers (the big CAD companies, from what I hear) happy. So despite discussion of big architectural changes to compete with DirectX, it ended up not even worthy of the major version number.

It highlights the problem of trying to be everything for everyone, and how that is impossible in many situations. Maybe the game developers need a special version of the OpenGL spec'ed out just for them. If not, they'll all jump on the DirectX bandwagon, and see you later.

But that got me taking another look at OpenGL ES, the OpenGL API reduced for embedded applications and gaining wide acceptance in the smartphone market. It was interesting to see that the PlayStation 3 uses ES as one of its available 3D APIs. And reading a few forums, I've seen comments from experts who think OpenGL ES, at least the 2.0 version centered around shaders, is OpenGL done right. The drivers are a lot easier to write, and the API cuts out almost all the duplication and focuses on efficiency. It does make one think.

For the future of Linux gaming then, should we be looking to OpenGL ES? I don't know how many OpenGL experts read this blog but I'd be interested to hear your comments on this. I recently bought a book on OpenGL ES programming to see what it was all about and it started to make sense to me that maybe this is the right direction. Heck, it almost seems like Khronos's master plan...

Sunday, October 05, 2008

What would I do without CDT

While waiting for my VOBs to sync from Salzburg to Ottawa, I thought I'd poke around qemu to figure out exactly what I would need to do to add a PCI device. Apparently, there's very little, if any, documentation on how to do that. I even saw one response to a similar query that told the guy to go look at the source. So I did.

I started by grabbing the source for the latest qemu release, 0.9.1. I created a CDT Makefile project and untarred the release into the project directory. I created an External Tool to run configure with the options I wanted, and then I did a project build which ran the resulting makefiles. So far so good. Looking at the Includes folder in the project, I see it picked up the MinGW gcc standard headers as well as my project as an include path.

So off I went. First I looked for things beginning with pc_ in the Open Element dialog (Shift-Ctrl-t). There I found the pc init code and went looking there for PCI devices. I found the LSI SCSI device init and hit F3 to go look at the implementation. There I started seeing some generic PCI type things. To see what other PCI devices I could look at, I selected the call to register a PCI I/O region and did a search for references. In the Search results view I quickly saw other PCI devices - VGA displays, the IDE device, some networking things, USB. All good examples.

It wasn't long before I figured out what I needed to do. It got me thinking: how did I ever do this before the CDT, and how are the poor guys still stuck in the command line world doing stuff like this? I guess I used to do the same thing but with grep, which does simple text searches. But there's no way I could do the same navigation with the same speed. And things like Alt plus left and right arrow, to go back and forth along my path, don't happen in that environment.

No, CDT rocks. I hear a lot lately about how there are still many people hesitant to leave the safety and comfort of the command line world. I think that's too bad. They're missing out on some real productivity gains.

Saturday, October 04, 2008

QEMU Manager

Shh, I'm supposed to be working, don't tell my boss ;)

I wrote in a previous entry about the VirtualBox SDK, and the potential for using that SDK to add 3d graphics support. I was pretty excited. All I needed to do was create a DLL that could be loaded into the VirtualBox I used for running Linux on my Windows laptop. Well, I tried a simple example, but could never get the DLL to load. Looking at the source code for VirtualBox, I noticed that there's a "hardened" mode for building it. For security, it prevents rogue DLLs from getting loaded. I guess my DLL looked pretty rogue :(. And the complexity of building VirtualBox myself scared me off.

I've also been a pretty big fan of the Qemu emulator, especially for emulating mobile devices. But you can use it for emulating a PC and there is an accelerator driver that apparently makes it fast. So I guess I could give that a try. I've mucked around in the qemu source in the past and I have an idea on how to add a device. It's not as clean as the VirtualBox SDK promised, but it could be done.

Along the way, I found Qemu Manager, a nice GUI that manages virtual machines and launching them on Windows. Very cool. And it's extensible so that if a new version, or a cleverly hacked version, of qemu comes out, you can have it manage launches for them as well.

So this week's "Open Source Tools Kudos" goes to David Reynolds for building the Qemu Manager. Very cool and thanks!

Friday, October 03, 2008

Another Awesome CDT Summit

I just realized I haven't blogged about our CDT Summit last week. Shame on me, because it was a great three days. It was smaller than previous summits, only 16 people. But this time, we didn't have the guys who were just lurking in person. Everyone there represented real Eclipse contributions. So we ended up getting a lot of real work done.

We started the first day by updating each other on what we were planning for the next release, Galileo, which we determined by the end of the week will be CDT 6.0. It's much more of a marketing number, since I don't anticipate huge API changes, but there will be some, especially in the build area.

We then talked a lot about development process and how we can improve the way we work on the CDT. It's a challenge since many of us are only part time on the CDT. But we all agreed that becoming more formal in the way we do things is necessary and we have plans on doing just that.

The next day we broke up into two groups to do some deep dives into specific areas. One group dove deep into indexer issues. They talked a lot about some of the tougher areas that the CDT index and parsing technologies need to deal with, like C++ templates. The biggest new thing from them was discussion on how to represent inactive code, i.e. code that is ifdef'ed out given the current configuration. They settled on starting with doing so in the Outline View at least. Any more is a research activity.

The other group talked about debug. The biggest move there is the integration of the Debug Service Framework from the Device Debugging project into the CDT. I anticipate DSF will be the future standard debugger integration point for the CDT, and maybe even for the Platform. At the same time we will continue to support our existing CDT Debugging Interface (CDI) integrators. I'm also excited about the potential disassembly debugging editor that should hopefully make it easier to look at the object code being debugged.

On the last day we talked about e4 resources and the straw man proposal I have put together. We also talked about the CDT build system, and in particular the CDT project model that serves as the common model for them all. We've unfortunately lost the main developer of this work before it was finished, and we need to take another look at it and hopefully simplify it. It's a good lesson that you shouldn't be hands-off with an open source component you totally depend on. The people working on that component may vanish, and then you have no way of getting fixes in.

I really wasn't sure what to expect from this year's summit. In many ways, the CDT is feature complete. And the plan will show that. But we still have a lot of usability and quality issues to address, and the team is committed to focusing on that this year. And that will help solidify the CDT's place on the developer's desktop. It was exemplified by Red Hat's presence there. They are committed to making Eclipse the Visual Studio of Linux development. And that's a good place to be.

Saturday, September 20, 2008

A cool multi-platform CDT use case

I previously blogged about the VirtualBox SDK and the capability it provides to build some really interesting emulation environments, 3D graphics being the one I'm most interested in at the moment. And this is something I'm seeing in the embedded industry a lot lately. Hardware is expensive. The boxes we have on our desks are very powerful and relatively cheap. Being able to emulate hardware during the software development phase of a project lets the developer get his code up and running much earlier.

So looking at how I'd build an emulator for a Linux set-top box with 3D graphics capabilities, it quickly became apparent again how the multi-platform capabilities of the CDT give me a top-class C/C++ IDE to work on all of the components. Here's what they would include:

  • The 3D graphics emulator is a shared library that VirtualBox loads and, of course, runs on the host. I would start with doing it on Windows since that's my main environment. I'd use either MinGW gcc or the Visual C++ compiler. CDT has support for building both but only debugging with gcc at the moment. But shared library debugging on Windows has always been trouble with the CDT so that might not be important. In the long run, I'd probably also want to do this for a Linux host environment.

  • The box would run Linux, of course. I'd need to be able to build the drivers that talk to the emulated 3d graphics thing. And I'd need to be able to build a bootable image with the kernel and the drivers and any core utilities I would need. Again CDT comes to the rescue, but I'd probably pick a commercialized version that automates Linux kernel development such as Wind River's Linux platform builder. And I'd have to use my Ubuntu development environment on VirtualBox to run it since Linux is the only environment that really supports building the Linux kernel.

  • For the actual user space programs that provide the content, I can use the CDT again. This is the main environment that is used by the community building stuff for the box. gcc's great cross compilation support makes it less obvious whether you'd do this on Windows or Linux. Linux would be a favorite since you can easily share your development workspace with the target using NFS. Something not as easy on Windows.

For me, this is the big advantage of the CDT. You have yourself doing host and target development on Windows and Linux, and even Mac if you wanted to, all using the same tools that have the same UI and keyboard shortcuts. Now, where's that cloning machine so I can actually go build this thing...

Tuesday, September 16, 2008

Another legend has eyes on the future

I just finished reading an interview with another legend of the game programming industry, Epic's Tim Sweeney (Mr. Unreal). First it was John Carmack from id (Mr. Doom) wondering how game developers will be able to harness multi-core technologies to improve game performance. Now I see Tim has a very interesting vision for how these technologies are going to change the industry.

It looks like both of them agree, multi-core general purpose processors will make graphics specific processing units obsolete, at least the fixed function parts of those graphics processors. But Tim seems to have a grasp of what that environment will look like. And it's both exciting and liberating.

Essentially, he sees the return of software rendering, just like we had before the 3D hardware accelerator industry kicked in. Software rendering gives game programmers the freedom to implement whatever algorithms suit their needs, and they aren't tied to the DirectX and OpenGL APIs, which Tim says are really tying their hands. They can create whatever data structures they need to represent a scene and do whatever they want to get that scene onto the pixels.

And he sees building that future with general-purpose programming languages, and C++ in particular, instead of custom, hardware-specific languages. Using C++, you have a shot at simplifying the programmer's life, using the same technology for everything compute intensive. Game algorithms essentially come down to doing as many floating point operations at the same time as you can. Of course, this can be done in C++ with the right libraries, or even, if necessary, a good compiler that can optimize your code to take advantage of whatever vectorizing capabilities your hardware has.
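As a sketch of that idea, here's the kind of plain C++ inner loop a vectorizing compiler can map onto the hardware's parallel float units. The function name and shape are my own illustration, not from the interview:

```cpp
// A data-parallel inner loop: every iteration is independent, so a
// vectorizing compiler (e.g. gcc with -O3 -ftree-vectorize) is free to
// process several floats per instruction using the CPU's SIMD units.
void transformVertices(const float* in, float* out, int n, float scale) {
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * scale;
}
```

The key is writing loops with no dependencies between iterations; the same C++ source then runs on anything from a netbook to a many-core monster, which is exactly the liberation Tim is describing.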

I anticipate this will be an exciting time for game engine developers. Certainly Tim seems pretty excited about it (and I highly recommend reading this article to catch some of that excitement.) And the good news is that we don't need to invent new technologies to make it happen. C++ will do just fine (with a little help of the CDT, of course ;)

Saturday, September 13, 2008

VirtualBox 2.0 gains an SDK

When you're an Eclipse developer like I am, taking advantage of Eclipse's cross-platform capabilities, you need a bunch of platforms to test your work on. The incredible growth of virtualization on the desktop over recent years has been a huge help for those of us who don't necessarily want our offices filled with a dozen machines.

I've tried them all and settled on VirtualBox, which was recently bought by Sun, as my Windows Laptop solution (I use KVM on my Linux desktop box). It has the best handling of screen resizing I've seen and thanks to the great support for this in recent Linux distros, the window for the VM flows nicely into my daily workflow.

VirtualBox released their 2.0 version last week. The big news is support for 64-bit hosts and guests (yeah, old news for other solutions, especially given VirtualBox still doesn't support SMP :( ). But what caught my eye was that extra download labeled 'SDK'. Nothing gets me more excited than an extensible platform (well, there are some things...). So I was quick to unwrap their new gift.

The SDK mainly covers APIs they've exposed to build VM management tools, similar to libvirt that's used on Linux platforms. It lets you create, configure, and launch VMs. Cool, and maybe it'll lead to better UIs for this, mind you the one they have is already pretty good.

The more interesting part of this was the last chapter of the SDKRef PDF file. It talks about the mechanism they use to allow communication between the guest operating system and the host. It allows you to create your own drivers that communicate through a virtual PCI device to a shared library on the host. Now the header files weren't shipped as part of the SDK, but they are part of the open source parts of VirtualBox. At least the doc shows you how. Very cool.

Now this comes to one of the most pressing things I wish virtualization could do: use the 3D graphics chips on the host. If I want to experiment with some of the ideas I have for 3D Linux UI frameworks, and still use Windows for my day job, I need something like that. And with this capability opened up for us to use, I can quickly imagine how I could get OpenGL calls from a guest OS out through this mechanism to the host OpenGL libraries. And I guess I'm not alone. In the list of built-in users of this mechanism is a mysterious service called VBoxSharedOpenGL.

Wednesday, September 10, 2008

Time ripe for a Linux console?

I was watching my son the other day on our XBOX 360 that's tucked nicely in our cabinet under the TV with our DVD player, digital cable box, and receiver. He was playing Halo 3, which looks great on our LCD HDTV, BTW. He'd break out once in a while and go back to the Dashboard and send a text message to a buddy then go back into the game and use the headset connected to his controller to talk about his school day with another buddy he was shooting at. It's incredible how far consoles have come from the old Atari boxes we had when we were kids. Now they're these multi-processing entertainment centers and communication devices that hook our kids up to the rest of the world.

It's also interesting how he's migrated away from our PC over to the XBOX. That could be because our PC is getting old and the 360 is actually a more powerful machine. But there are still things you can't do on it. That would probably be solved if it had a web browser built in. But for some reason, and correct me if I'm wrong on this, there doesn't seem to be a web browser available for the 360. Weird. Too bad it's a closed platform that makes it really hard to get open source software, like the WebKit browser engine, ported to it.

So that got me thinking in the context of Linux. Why isn't there a Linux console? Linux is slowly getting better on the desktop, and it's about to break out huge in the mobile space. Wouldn't it also work well in a box I can put under my TV and use with a wireless keyboard, or a game controller with a headset, or with the controllers we have for Guitar Hero and Rock Band? I don't see why not.

Googling the idea, you see the GP2X WIZ handheld I've blogged about in the past, and the sad story that was Indrema, which rose with the hype of Linux in 2000 and crashed with the market realities of 2001. And yeah, Linux probably wasn't ready in 2000. But nothing seems to be happening now.

And I'm sure there are economic roadblocks to making it happen. The companies in this industry are huge and are still selling the boxes for less than it costs to build them. Having an open platform makes it pretty difficult to collect the license fees that subsidize the hardware and platform development costs. You'd need a big player with big friends, similar to one of the Linux handheld alliances, to even think of making this happen.

But if it works for handhelds, why not on the TV. At least there it would have a bigger screen...

Tuesday, September 09, 2008

Get out your lambdas - C++0x

I saw a video of a talk by Bjarne Stroustrup, Mr. C++, who people I work with know I call "Barney", affectionately, of course. In the video, he mentioned how badly he wanted to keep the name of the next major version of the C++ standard as C++0x and not have it slip into the next decade. Well, 2008 is almost over, so it's going to have to be C++09 if it's to make it. But they are trying hard and making some progress.

And hopefully it does. C++ is due for a good shot in the arm, something to get people excited about. Working every day in Java as I do, and yearning for my C++ days, there are a few features in Java that would be exciting to have in C++. Not many, but there are a few :).

And one of them appears to be ready for inclusion in the standard: lambda expressions. Now, Java doesn't have pure lambda expressions, but its inner class support comes close. And with C++0x's support for more general lambda expressions, I think we have a winner on our hands. Here's an example:

int x;
calculateWithCallback([&x](int y) { x = y; });

This ain't your father's C++. To explain what's happening: we're passing an anonymous function that takes a parameter y, and along with it a closure that captures some of the surrounding context, in this case a reference to x. Later on, the calculateWithCallback function does something and then calls our function with a value for y. Our function then executes, assigns the value to x, and returns.
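To make that concrete, here's a compilable sketch of the whole round trip, assuming a compiler with C++0x lambda support. The calculateWithCallback implementation is hypothetical; in real code it would do actual work before invoking the callback:

```cpp
// Hypothetical callback-taking function. A template parameter accepts
// any callable, including a capturing lambda, with no virtual dispatch
// or hand-written functor class needed.
template <typename Callback>
void calculateWithCallback(Callback cb) {
    int result = 42;  // stand-in for some real computation
    cb(result);       // hand the result back through the callback
}

// Mirrors the post's example: [&x] captures x by reference, so the
// lambda's assignment is visible to the caller after the call returns.
int runDemo() {
    int x = 0;
    calculateWithCallback([&x](int y) { x = y; });
    return x;  // now holds the callback's result
}
```

Compare that to today's C++, where you'd write a named functor class with a member holding the reference to x. The lambda collapses all of that boilerplate into one line.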

Anyway, very cool. Callbacks are a very popular design pattern, and we use them all the time when programming Eclipse plug-ins. Being able to do something like this concisely in C++ will be very useful and will help bring C++ into a new decade, or whenever they get the standard ratified.

Wednesday, September 03, 2008

exit() is your friend

I was just reading the Google Chrome cartoon book (an interesting way of presenting designs). One of the things they talked about was how having browser tabs in separate processes helps with memory consumption, because memory gets cleaned up when a process exits. Otherwise, the constant malloc/free cycle ends up with memory fragmentation that is hard to get rid of.

That brought back some memories. In my early work on a code generator, I used the same philosophy. I created a pretty big object model in memory after parsing the input, but I never implemented any of the destructors and never called delete. Didn't need to. It was a short-lived process, and the call to exit() at the end freed up all the memory anyway. And it's pretty fast! A lot faster than calling delete for each object I created.
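A minimal sketch of that style, with Node and buildModel as hypothetical stand-ins for the real object model: allocate freely, never delete, and let process teardown reclaim the heap in one shot.

```cpp
// A node in the object model. No destructor is ever run on purpose.
struct Node {
    int value;
    Node* next;
    Node(int v, Node* n) : value(v), next(n) {}
};

// Builds a linked model of 'count' nodes and intentionally leaks it.
// Fine for a short-lived process like a code generator, where exit()
// (or returning from main) hands the whole heap back to the OS at once.
// Fatal in a long-running tool, as the other team discovered.
Node* buildModel(int count) {
    Node* head = 0;
    for (int i = 0; i < count; ++i)
        head = new Node(i, head);
    return head;
}
```

The trade-off is exactly the one in the post: you save the per-object delete traversal, but the moment the model moves into a long-running process, every one of those missing destructors has to be written.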

Anyway that worked great. Until another team decided they liked my object model and wanted to use it in the main tool. Unfortunately that tool was a long running process and they had to add in the memory management to survive. So much for exit() is your friend. Worked for me, though.

Of course, all the garbage-collected languages deal with this for you. Makes me wonder why GC in C++ hasn't become more popular. There are C++ garbage collectors, like the one from Hans Boehm. But I guess if you're moving to that paradigm, you might as well use Java.

Tuesday, September 02, 2008

Coming Live from Google Chrome

Well, it's live and I've downloaded it and am using it to write this blog entry. It's Google Chrome. It's a beta, but from what I've seen in the couple of minutes I've used it, it's delivering as promised. Very fast and smooth, even typing here. Better than Firefox? Seems like it, but maybe it's the chrome blinding me. And given the news volume about it, there's a lot of people speculating about what Google is trying to accomplish with this thing.

At any rate, if it is about making the Web the OS as we've been trying to do for centuries now, what does it mean to C++ application developers? How do they make their applications relevant in this new world? Is it all over? Do we throw away our C++ compilers and pick up a book on PHP?

I strongly believe there will always be a role for a close to the silicon programming language like C++. Whether it's for resource constrained devices like mobile platforms, or whether it's for high performance apps like image processing or simulations, there's still that need.

What may change is how these C++ apps communicate with the user. I can easily imagine a Web-based UI for C++ apps, similar to other Web 2.0 platforms. Who says the server side needs to be Java or PHP? It could easily be a C++ app. What we need, though, is a clean way to program such a UI. C++ widget programming has always been a challenge, but wait until you change the paradigm like this.

This is one reason I'm keeping an eye on the "Webification of SWT" part of the Eclipse e4 project. The lessons learned and the technology choices made there should be portable to a similar effort in C++. Maybe there's already a C++ widget set out there that we could use to start, like wxWidgets, maybe something else, maybe something new. Either way, it's time for C++ developers to start thinking about what this all means to them.

Monday, September 01, 2008

Google has their own browser!???

Apparently the word leaked on an unofficial Google blog site and they followed up with an "oops" official blog post. Either way, the word is out and the web browser "industry" is in for a shake up. Google is releasing their own web browser called Google Chrome. Apparently it includes pieces from Webkit (I'm guessing the browser part) and Firefox (I'm guessing the chrome part) and will be developed as an open source project.

The first beta will be released tomorrow (Tuesday). I've heard rumors but always dismissed them. Why would they do that when we already have a handful of pretty good browsers? I guess the rumors were true, and given that the beta comes out now, it's been in the works for a while.

But still, you've got to ask why. Why couldn't they just contribute the stuff they felt was important to Firefox or WebKit? I'm going to guess that it's because sometimes getting your ideas into an open source project is hard. Everyone with a few open source miles under their belt knows how hard it is to influence a community at times, and this isn't exactly the first fork in the industry. And when you have the resources and experience Google has, I guess it made more sense for them to fork.

My favorite quote is from the cnet news article where I first stumbled on the news: "Open sourcing the code is a smart way to avoid the 'Google wants to take over the world' fear, but it seems that Google has ambitions to create a comprehensive Internet operating system, including a browser, applications, middleware and cloud infrastructure."

Very intriguing. And this is one of the reasons I'm very interested in Android. Because that's what it is, an internet operating system for mobile. It isn't much of a stretch to take it beyond the cell phone so it'll be very interesting to watch where this goes. (And, yeah, I think Microsoft should be paying attention to this.)

Friday, August 29, 2008

Where's Wascana 1.0?

For those who haven't heard of Wascana, it's a lake in the center of my birthplace, Regina, Saskatchewan, Canada. It's a beautiful oasis in the middle of the bald Canadian prairies and my last trip there inspired me to name my Eclipse CDT distribution for Windows desktop programming after it.

Around this time last year I realized that school was about to start and I rushed out the first release of Wascana 0.9.3. To date I've had almost 9,000 downloads of it showing me that there is interest and a need out there for such a package.

My plan was to get Wascana 1.0 ready for this school year. But my summer has been very busy and I haven't had a chance to work on it. But hear me now and believe me later (I'm sure that was in an old Saturday Night Live sketch somewhere), it is still on my roadmap. For one thing, I really want to make it a showcase for the Eclipse p2 provisioning system showing how you can build a feature rich install and update environment for your whole IDE, not just the Eclipse bits.

Aside from that, I want to add the Boost C++ libraries to the package. Boost is a very full-featured C++ template library that gives you a lot of the library functionality that makes Java so good, and it's often a showcase for new technologies that end up in the C++ standard anyway.

I'm also waiting for an official release of gcc 4.3.1 for MinGW, to give us the latest and greatest compiler technology from GNU with super good optimization and support for OpenMP for parallel programming. There's also the newest gdb debugger that gives pending breakpoint support so we can get rid of a lot of the kludges we had to put in place to support this kind of thing in the CDT. Unfortunately, Windows debugging for MSVC support isn't as complete as I'd hoped, but there has been progress as part of the Target Communication Framework (TCF) work at Eclipse, so we will get there sooner or later.

And, of course, there's Ganymede, including the latest CDT 5.0.1 which will be coming out with the Ganymede SR1 in a couple of weeks. CDT had some really awesome improvements, including new refactoring support, in the 5.0 stream.

So for those waiting, I'm glad you're a patient bunch. The wait will be worth it for this critical piece of my continuing effort to get the grassroots C++ programmers and hobbyists, many of whom are working on Windows, into the Eclipse world.

Tuesday, August 26, 2008

Open Source Handhelds

Quite a while ago now, I posted about the open source gaming device from Korea known as the GP2X. At the end of the day, it ended up with a storied history, and while I love the concept of a handheld mobile device for which you can write your own applications, their execution as a company outside of Korea wasn't that great, and only a distributor in the UK was able to make any kind of splash with it.

At any rate, I found on Slashdot that they have announced a new generation of the product called the Wiz. The links lead you to the UK site and a big JPEG of the brochure in English. The specs look pretty good, ARM9 processor at 533MHz, 3D accelerated graphics, Linux of course, and support for audio and video making it a pretty cool multimedia gaming machine, for which you can write your own applications. And hopefully they'll be a bit more successful at delivering it than the last one.

But there are other choices for such open handheld devices. One of the commenters on Slashdot pointed to another one called OpenPandora. It has better specs, including the TI OMAP3, which is a monster ARM Cortex-A8 processor with full OpenGL ES 2.0 (i.e. with programmable shaders) graphics. It comes at what I believe will be a higher price point than the Wiz, but it is more powerful and has a QWERTY keyboard.

Looking at this in combination with the Linux mobile phone thrusts going on reminds me of the early days of the PC. Lots of different platforms doing specialized things that beckon the hobbyist programmer to come play - VIC 20, Commodore 64, Trash-80, ... The PC is relatively boring today, but maybe these devices can bring in a new generation of programmers who love to play like we did "back in the day".