And now for something completely different...
Enough about I am Eclipse, Eclipse I am. I actually think Steve Northover really is Eclipse, or is it not Steve's Widget Toolkit? Or is it Crazy Doug's Tooling? Or something...
Anyway, the talk at Tom's Hardware today, or at least this guy's opinion piece, is on AMD's insistence that multi-core is dead, long live Accelerated Processing Units. Actually, it sounds like almost the same thing as multi-core just with fancier hardware as some of the cores. The Opteron architecture makes the array of possibilities interesting and feasible and it'll be cool to see what they come up with now that they've teamed up with my fellow Canadians at ATI.
One possibility I find intriguing is integrating stream processors into the concoction. From my Google searches I see stream processing has been around for a couple of years now and has hit the streets as the technology behind the latest generation of graphics cards. A stream processor is essentially an array of parallel processing units that behave like SIMD processors, but operate on streams of data. In that sense it can handle large volumes of data like a DSP, but with multiple processing units.
So why is that interesting to me? Well, I am always looking for the next great programming paradigm. When object-oriented programming first came along, back when I was in university, I was an early adopter because I could see the benefits it gave me when organizing my programs. I am always on the lookout for the next big improvement in programmer productivity, and we've been stuck now for quite a while with objects and classes and methods and such.
I am positive the next big thing will be parallel programming. The hardware guys are making these great multi-core/multi-processing-thingy machines. The question is: what is the right programming paradigm? Is stream processing it? Maybe. But I am thinking the next big thing has to be multi-dimensional programming in some form or other. I still wander back to Action Semantics from UML as a possibility, but this stream thing is interesting too...
Hey all. This blog records my thoughts of the day about my life on the Eclipse CDT project. I will occasionally give opinions and news regarding the Eclipse CDT - the project and its ecosystem - and on open source in general. Please feel free to comment on anything I say. I appreciate it when people are honest with me. And, please, please, consider all of these opinions mine, not of my employer.
Tuesday, December 19, 2006
Eclipse is You and Here's Why
There have been some interesting points on the Planet following the "Eclipse is You" post by Bjorn. It has always rubbed me the wrong way when people criticize the committers for not meeting their requirements. And it's not just with these posts; we get it sometimes on the cdt-dev list and in Bugzilla too. But as I mentioned in my last post, I do appreciate the feedback as it helps me understand what I need to do to grow the community.
But people keep forgetting one thing about the committers. They don't work for the Eclipse Foundation. They are not contractually obligated to do anything, really. I work for QNX Software Systems. They pay me to work on the CDT because it is a fundamental piece of our Momentics IDE. Any work I do beyond that is on my own initiative and if my time is needed elsewhere by my employer, I have to drop those things.
So when people say that the Foundation, or the Eclipse Board for that matter, should get the projects to do this or that, they can't. There is no mechanism in the governance model for Eclipse to make that happen. It just doesn't work that way.
That's why Eclipse is You. If you want something done in Eclipse and no one else wants to do it, you have to do it yourself. And, unfortunately, simply submitting patches doesn't always work. Patches require committer time to apply, and as I've mentioned, whether the committers have that time is at the whim of their employers. Not only that, but you may have to persuade the committers that you are doing the right thing.
So we do the best we can, and we try to go beyond the call of duty to make sure the community is happy. Most of the time, it works out. But sometimes it doesn't, and I understand the frustration. Remember, though, that Eclipse is a meritocracy. Submit a number of great patches and help the community out, i.e. go beyond the call of duty yourself, and a committer would be happy to nominate you as one too.
Monday, December 18, 2006
It's all about You!
According to Bjorn, Eclipse is You. Being on the receiving end of many "CDT doesn't do X" or "CDT is too slow doing Y" comments, I couldn't agree more. Say those words and you earn instant membership in the CDT community. And I appreciate every one of them, I actually do. It means you care and have spent the time to contribute your guidance to our collective knowledge. And it's the first step down the path to contributing even more.
Now, Time Magazine has selected the Person of the Year, and it also happens to be You. Coincidence? I think not!
Wednesday, December 13, 2006
Migrating from Visual C++ to CDT
I've put the finishing touches on the CDT's managed build support for the Windows SDK. Well, at least there's enough there for people to try with our upcoming CDT 4.0 milestone (M4, but it's really our first for this release). It auto-detects where you've installed the compilers, header files, libraries, etc., by looking them up in the registry. I've also updated the error parsers to more accurately parse compile and link errors. It works pretty well, and I'm using it to build the native code for the Windows debugger integration.
But, you know, I forgot about the standard builder. It's funny how you get tied up in solving the hard problems when the easy ones are there staring you in the face. I have to give a big thanks to three guys from IBM India who have written a tutorial on how to import Visual C++ projects into the CDT. The solution is elegant in its simplicity and really shows the flexibility of CDT's standard make projects.
All you need to do is get Visual Studio to generate the makefile for you. This is a feature it has always had to support external builds (although in recent versions you can also run Visual Studio headless to do builds). Then you create a CDT project at the root directory containing your source. Of course, you'll have to change the make command to use nmake, Microsoft's own nasty version of make, but that's pretty easy to do and works well.
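To make the workflow concrete, here's a sketch of the kind of makefile Visual Studio exports (the file names and compiler flags here are illustrative, not copied from any generated file or from the tutorial):

```makefile
# Illustrative nmake-style makefile; cl.exe/link.exe flags are typical
# Microsoft toolchain options, shown only to give the flavor.
CPP = cl.exe
LINK = link.exe

all: myapp.exe

myapp.exe: main.obj
	$(LINK) /OUT:myapp.exe main.obj

main.obj: main.cpp
	$(CPP) /c /EHsc main.cpp
```

In the CDT project properties you then point the build command at nmake instead of make, and the standard builder drives the exported makefile as-is.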
Combine that with this guy's perception that the CDT has certain features Visual Studio users would like, and the discussions I've had with embedded developers who use the CDT but turn to Visual Studio for emulation on Windows, and I get a warm fuzzy feeling that supporting the Windows SDK is the right thing for the CDT. There are Windows developers looking for a migration path into the Eclipse ecosystem.
Thursday, November 30, 2006
Fun with OpenGL ES
I've got two young teenage boys and, like a lot of other young teenage boys, they love video games. The computer geek in me, of course, made me wonder how the games were made. So I spent a fair amount of my hobby time a couple of years ago learning a bit about the games industry and the technological challenges they face making games look great with limited resources. It was very interesting, and if I were a few years (O.K., a lot of years) younger I would have considered a career change in that direction.
But I still poke my head into gaming technology once in a while, especially when I have an excuse to test the CDT on it. One thing that I ran across in my investigation into the needs of embedded developers is support for the OpenGL ES standard. This is a cut-down yet still pretty powerful version of the OpenGL standard that lies at the heart of video games like Doom 3 and the cool desktop effects you see with Mac OS X. The ES version is used on many embedded devices such as cell phones and PDAs.
PowerVR is a chunk of 3D graphics silicon IP that is included in some pretty cool System-on-a-Chip (SoC) parts, often paired with Arm processors. I think you'll see these chips popping up in many new and exciting places. But as this happens, they'll need content to drive their 3D power. And that means a lot of people are going to need to learn how to program to the ES standard.
Imagination Technologies, where PowerVR originates, came up with a great way to get more people programming for their chips: a Windows OpenGL ES emulation environment. With it, you can write an OpenGL ES application and run it on your Windows box instead of having to fork out a lot of money for boards with the PowerVR core before you get serious about it.
This is a great example of why more and more of the embedded developers I run into are excited about the Windows compiler and debugger support I am working to deliver for CDT 4.0. With this environment you get a professional-quality Windows environment to build and debug your application against the emulator, and then you use the same development environment to work with the code as you polish it up for the end device. I'm really looking forward to getting this capability into developers' hands so they can build some cool games. For the kids, you know...
Monday, November 27, 2006
printf-free Debugging
When people ask me who the CDT's biggest competitor is, they often expect me to say things like Visual Studio or one of the many Linux IDEs (and no, NetBeans isn't quite there yet). But the truth is that the biggest competitor to the CDT remains good ol' vi and make (or emacs and make for the more advanced developer). We are certainly working hard to make the CDT an easier environment to adopt, but there are still the masses who cannot afford the time to climb the learning curve.
But the 'vi and make' answer only addresses edit and build. As I've mentioned here before, my favorite IDE feature remains visual debugging. For me, nothing beats that quick glance at the stack and then moving over to the Variables view to see what all the values are. Measure that against the number of gdb commands you'd have to enter to do the same; you just can't beat it (did I also mention that I hate typing?).
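For comparison, that same quick glance at the command line looks something like this (the command names are real gdb; the frame number and variable name are made up for illustration):

```
(gdb) bt            # the stack trace the Debug view shows at a glance
(gdb) frame 2       # select the frame you care about
(gdb) info locals   # the Variables view, one command per look
(gdb) print *node   # and another command for every expression you inspect
```

And you get to retype the last two every time you step.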
But looking around the industry, I finally figured out who our main competitor on the debug side is: good ol' printf. Now, in some environments where it's hard to set up a debugger and all you've got is console output, you have no choice. But how do these guys live with the edit/build/debug cycle every time they want to see a different variable's output? We've all done it at some point in our careers, and likely very recently too. We need to make sure we have the right tools to put a stop to this.
At the very least, for the embedded developer, you have JTAG to drive things at the lowest level. And with many of the JTAG devices now supporting the GDB remote protocol, you can use gdb to debug at these levels. The next step is to have the CDT better support GDB running in that configuration. And that's what I'm working on today (sorry, Windows debugger, you'll have to wait a couple of weeks).
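Connecting gdb to one of those JTAG probes is typically just a matter of pointing it at the probe's GDB-server port (the host and port here are placeholders; your probe's documentation has the real ones):

```
(gdb) target remote localhost:3333   # attach to the probe's GDB server
(gdb) load                           # download the image to the target
(gdb) break main
(gdb) continue
```

Once that connection is up, everything the CDT debugger does locally it can, in principle, do against the board.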
Tools have an immense opportunity to improve developer productivity. But in order for the developer to benefit from this, the tools need to be easy to learn and use. I think that'll be the next big challenge for the CDT, and one we'll need to address to be truly 'Uber'.
Wednesday, November 22, 2006
SystemC/qemu - Three Worlds Collide
Way back in February, I wrote about a really cool hardware description language called SystemC. It is essentially a C++ library that allows a programmer to model hardware concepts and includes a run-time that simulates the hardware. This is one of the reasons I love C++: you can use templates and overloads and inlines to bring the abstraction layer up a few notches, essentially defining a new language, without having to build a new compiler. And it usually optimizes out to something very fast.
I've also been following the development of qemu, a processor emulator that runs on multiple targets and has extensibility to add emulation of peripherals as well. And, if you have a fast machine, you can get almost the same performance as the real hardware. The best thing I see about qemu is that it really lowers the barrier to entry for people who want to try out embedded development without having to spend real money on real hardware. And it is much easier to carry around to trade shows :).
Well, if you have two very cool simulation/emulation environments, wouldn't it make sense to combine them? Of course, and that's what a group from the Universitat Autonoma de Barcelona has done. They have implemented a bridge that appears on the qemu PCI bus for drivers to access and passes signals back and forth to the SystemC design. It's a great idea and really opens the door wider for hardware/software co-design.
Friday, November 17, 2006
Gotta Love The Wide Screen
I got a new laptop last week after mentioning to my boss that I need to see what those Intel guys are up to with the multi-core support they are adding to the CDT. They already have parallel build for the managed build's internal builder and were discussing parallelizing the CDT indexers. (Not to mention I always like to have the latest gadgets, shhh.) And my boss obliged.
I was a bit concerned with getting the new laptop since our new standard is for a 15" 1680x1050 wide screen. I don't have the best eyes in the world and the fonts looked pretty small on it. But, hey, it would be cool for watching wide screen movies at home. And after setting things up, I got the fonts looking good enough.
But after setting up my Outlook to pane things vertically and firing up Eclipse on it, I can't believe we haven't been using this format before. What a difference! With Eclipse, I'm always dragging the splitters left or right to reveal the Outline view or the Navigator views, or ^M'ing the editor to full screen so I can see long lines of code. Not any more. I used to use my external 19" LCD for most of my Eclipse work, but now I'm finding I enjoy working on either (the 19" is at eye height and has bigger pixels so I can see them better, though).
So my favorite tool of the week is now wide screen LCD monitors! Although qemu is still right up there, and they are now working on adding OpenGL support, more on that when I find out more...
Oh, and, BTW, Blogger has a new editor for creating blog entries. I love what it just did for spell checking.
Monday, November 13, 2006
Sun GPL's Their Java
We've all been waiting to see what Sun will really do when it talks about open sourcing Java. I noticed there was a lot of mistrust amongst the open source communities, but I have no prejudices against Sun so I kept an open mind.
Today (well, actually some time over the weekend), Sun officially turned on the pipes and you can download it. You can see it all at java.net. There you will find their implementations of J2ME and J2SE, as well as J2EE, which they had open sourced earlier. For J2SE, only the VM and compiler are open sourced so far. I guess they are still working through third-party licensing issues to get the complete JDK contributed.
They have chosen the GPL as the license, with the Classpath exception on the libraries so that you can link commercial code without being affected by the GPL. I'm always a bit cautious about the GPL, and you really have to be careful when using it, but I think it's a pretty good choice that will help avoid the forking Sun has worried about and enable the JDK to be shipped as part of Linux distributions, which has been a pain in the you-know-where in the past.
Time will tell, however, how well Sun can build the community around it. If they begin to allow committers from outside of Sun, I think you'll see over time the mistrust fade. But if it becomes apparent that they aren't that serious about letting others play in the sandbox they've made, then it will all be for naught. But if it is successful, it'll be interesting to see how the Apache Harmony project is impacted since they are essentially duplicating effort but with a different license. We'll need a magic decoder ring to figure this all out...
So, being the hacker that I am, I dove directly into the source tree to see what the code looked like. They are using Subversion, which I really hope we at Eclipse.org will switch to one day. I also see that the VM code is written in C++ and looks pretty clean. Which, of course, brings up the question of whether you can use the CDT to work on it. Of course you can! I'll have to spend some time in the upcoming weeks putting together a tutorial to show how.
Interesting times ahead, of that there is no doubt.
Monday, November 06, 2006
EclipseCon 2007 is (Unnecessarily) Fair
Here we go having a debate again on Planet Eclipse. I apologize to my readers who don't follow the Planet. But then you should. It's always great reading!
As a member of the EclipseCon 2007 program committee, I had to take offense at Wassim's remarks about the "Contraversial" EclipseCon 2007. Now, he has every right to state his opinion, and we should all respect it and take a good look at what's going on. If he's right, then it is something we need to look at and make sure gets corrected.
But I think his statements are a bit off the mark. Bjorn had a nice post that summed up a lot of how I felt about it, and I agree 100% with what he said there. I'd like to add a bit more from the C/C++ track perspective.
First of all, I submitted a proposal for a short tutorial to the C/C++ track that I supposedly control. I did so because, at the time, I didn't have any proposals and was afraid that the C/C++ community was going to miss out on the opportunity. After doing a little recruiting I was able to convince a few members of the community to put in much better proposals than mine and I plan on rejecting mine in favour of theirs.
Now, if I do run into the situation that I have too few proposals for the tracks that have been allocated, or they are too weak, I will propose to offer them up to the rest of the Eclipse community to make sure we get good quality content. My understanding from the other committee members is that they plan on doing the same.
You can't get much more open than actually showing the allocations that have been given to the various tracks. I was pleasantly surprised to see that we were being that open. I've never seen it before, and it does open us up for criticism so early in the process.
I guess what hasn't been made public is that these numbers aren't necessarily written in stone and that we already have mucked around a bit with them. And we have left the door open to do more of the same. As Bjorn mentioned, we are all focused on making the EclipseCon program the best it can be for the attendees, which will go a long way towards growing our community. And I think it's a great thing to be doing it in the open for others to comment on and help improve.
Friday, November 03, 2006
Microsoft Novell
I've been busy working on the CDT integration with the Windows SDK and, at the moment the Windows debug engine, to support C++ development using this SDK as a choice over cygwin/mingw for Windows development. As I've mentioned previously, I'm keen on getting Eclipse and the CDT in a state where it can be useful for Windows developers and open up a whole new community to this great thing we've got going with Eclipse.
Taking a break from debugging the debugger yesterday, I tripped over a Slashdot article saying there was going to be a press conference webcast at 5 p.m. EST announcing a partnership between Microsoft and Novell on Linux. After checking the calendar to make sure it wasn't April 1, I tuned in. Watching the proceedings, I got the feeling I was watching history, like when Wayne Gretzky announced his retirement. Time will tell whether Microsoft entering the Linux/open source world will change anything, but today, it looks pretty significant.
What was clear was that it is more like Microsoft grudgingly admitting Linux is important to its customers than Microsoft throwing in the towel. But I think that is an important admission that will change how the open source world views Microsoft and, more importantly, how the Microsoft world views open source and Linux in particular.
Which brings me back to my Windows SDK integration. One of the visions we had for the CDT in the early days, was for the CDT to be the cross platform development environment that eases the transition for Windows developers who want to start working on Linux apps. It was great in theory, but the demand didn't really materialize (and neither did the community). Time will tell whether this announcement changes that. But in the meantime, it has given me a little extra energy to try and make sure the Windows SDK integration happens for both C++ and C# (Mono may have been given a boost with this also) to at least make the path easier to follow.
Thursday, October 12, 2006
CDT at the Eclipse Summit Europe
I'm having a great time here at Eclipse Summit Europe. It's the first time I've been in Europe and it's a pretty darn cool place to be. Watching TV and walking around the area has made me really want to learn German, which is my heritage anyway. And the architecture in this area of Esslingen is an unbelievably amazing throwback to early Germany. Someone mentioned that it's like something Disney would build.
Jet lag had kicked in hard just before my talk on the CDT DOM so I'll apologize to everyone who was there if it seemed disorganized. I was glad to be able to get the message out on how we are doing with the CDT and the cool things we are working on. The biggest feedback was the interest in the C#DT support I want to build on the CDT. I'm just trying to prove out CDT's multi-language capability but people are intrigued by the undertones of it all.
I also met a few people who were users of the CDT. Everyone gave me positive feedback on the performance improvements with the indexer. One group had even added a few UI features to make their lives easier and I encouraged them to contribute them back to the community, which they appeared happy to do. This is a general reminder for everyone: if you have made little improvements to the CDT, feel free to contribute them. We're certainly interested in spending time looking at patches, especially when they improve the user experience for all.
Given the liveliness of the discussions in the foyer where we had lunches and get-togethers, I'd say Eclipse is alive and well in Europe. Certainly looking at the list of people who attended the CDT summit, Europe is an important place that we need to focus on more, ensuring the community can easily span the oceans, and time zones...
Saturday, October 07, 2006
Qemu, my favorite tool of the week
I've been with QNX for over a year now and I continue to learn all I can about our Neutrino operating system, and development issues faced by embedded developers in general. If I'm going to focus on tools for embedded developers, I need to walk in their shoes.
I've worked with a couple of development boards and that was pretty cool, especially when lights started flashing and it started communicating out the serial and ethernet ports. But carrying around a board, even a little one, is a bit cumbersome and not very practical. I've been a long time fan of VMware and have used that. But, x86 is not where the excitement is in the mobile embedded market, so I have been looking for something less desktop-y to use.
Recently I ran across an article on using Qemu, an open source processor emulator that can emulate a number of different processors and peripheral devices, to run a Debian Linux ARM installation. It also claims that on faster machines, it can even be faster than the actual ARM processors it is emulating. This is exactly what I was looking for.
So I've started down a journey of writing a Neutrino BSP (Board Support Package) for Qemu, which emulates the ARM Versatile Platform Baseboard. I have the programmer's guide for this board, which is a pretty good help, but thanks to the fact that I have the source for the emulator, if I have any questions about how certain registers work, I can just look at the implementation in Qemu. The architecture is very clean so it makes these things very easy to find. It also includes a gdb remote interface that acts like a JTAG interface so I can step through my code.
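For anyone who wants to play along, the basic Qemu-plus-gdb loop looks roughly like this. The image names are placeholders, and the exact options may vary between Qemu versions, so treat it as a sketch rather than a recipe:

```shell
# Boot an image on the emulated ARM Versatile PB. The -s flag starts
# the built-in gdb stub on port 1234 and -S halts the CPU at reset so
# you can attach before any code runs.
qemu-system-arm -M versatilepb -kernel my-image.bin -nographic -s -S

# In another terminal, attach a cross gdb, much as you would over JTAG.
arm-none-eabi-gdb my-image.elf \
    -ex "target remote localhost:1234" \
    -ex "break main" \
    -ex "continue"
```

The nice thing about this setup is that the gdb stub behaves enough like a hardware debug probe that the same debug launch configurations should carry over to real boards later.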
With all these goodies, I think I'll have Neutrino up and running in this environment in no time. Very cool! There is a tutorial floating around by James Lynch on using the CDT with a tiny ARM-based board. I should write a similar tutorial on using Qemu's ARM emulation with the CDT and make sure these environments work well. It's a great learning tool if nothing else.
Friday, October 06, 2006
Open Source, The Double-Edged Sword
I know this has been talked about a lot in the open source industry, but I've personally started to see concrete examples of it. Commercial adoption of open source software is a double-edged sword. On the one side, it is great to have the flexibility to be able to take open source solutions, adapt them to your particular problem domain, and add the value of those solutions to your product. On the other side, it is a lot of work to do that, and in a commercial setting, a lot of work == big expense.
And, of course, you really have to watch the pointy end of the sword that could kill your project, and I think that pointy end is unrealistic expectations. But, I think the industry is really starting to get that open source != free. And, unfortunately, I think a lot of it comes from war wounds, but the lesson is getting learned.
This does, however, come back to the software as a service story I keep blabbing about. Now, you can either hire and train people to do the work, or you can contract out to a services company that (hopefully) already has the trained people to do the work for you (hopefully) cheaper. It was pretty interesting then when I noticed a little-advertised feature of the Eclipse.org web site, the Services section. It lists a few companies that are offering services to the Eclipse community. It is unfortunate that there are so few and very few with a non-Enterprise focus, but the opportunity is there for those who want to take it.
Tuesday, September 26, 2006
JDT/CDT, Can't we just get along?
So between putting the finishing touches on CDT 3.1.1, some QNX work items, and dealing with the various summits and stuff, I've started working a little on my CDT/Windows Debugger API integration. It didn't take too long before I got my workspace set up to work on it. I'm creating Java classes that will plug into the CDT's debug engine (or engines as the new Debug Services Framework comes together). I'm also creating C++ code to implement native methods that talk to the Windows APIs. I'm about to figure out the right way to do callbacks from those APIs through my C++ code up into the Java code.
I'm trying to follow as much as I can the SWT model by doing as little in the native code as possible and putting most of the logic in Java. As much as I prefer C++ to Java, I don't have a good answer to the question "How do you debug the debugger" so I'll be relying on the Java debugger and good ol' printfs to get me through.
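To make that pattern concrete, here is a rough sketch of the Java-side shape I'm aiming for. All class, method, and library names here are made up for illustration, and the real engine will be considerably more involved:

```java
// Sketch of the Java side of a debug engine following the SWT model:
// the native methods are thin entry points implemented in a small C++
// JNI layer that talks to the Windows debug APIs, and events come back
// up through a callback the C++ code invokes (via JNIEnv CallVoidMethod).
// All names here are hypothetical.
public class WinDebugEngine {

    static {
        try {
            System.loadLibrary("windbgjni"); // the C++ JNI layer
        } catch (UnsatisfiedLinkError e) {
            System.out.println("native debug layer not available");
        }
    }

    // Thin native entry point; keep as little logic in C++ as possible.
    private native boolean attach(int pid);

    // Called back from the C++ layer when the target stops.
    // The interesting logic stays up here in Java where it's debuggable.
    void onDebugEvent(String event) {
        System.out.println("event: " + event);
    }

    public static void main(String[] args) {
        WinDebugEngine engine = new WinDebugEngine();
        // Simulate the callback path the native layer would drive.
        engine.onDebugEvent("breakpoint hit");
    }
}
```

Keeping the native layer this thin means most of the engine can be stepped through with the Java debugger, which matters when the thing you're debugging is itself a debugger.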
But once you have a mix of CDT and JDT projects, the workflows aren't pretty. The first thing that hit me was when I was in the C/C++ Perspective and hit the New Class button to create a Java class. Uh, nope, that creates a C++ class. So I'm starting to find myself flipping back and forth between the Java and C/C++ perspectives to get the right navigator views and toolbar buttons at the right time. It's a pain and it would be nice if we had a "Code" perspective or something where we could write code without the context switching.
And then, there's the result of my work. When I do get the debugger integrated, I may just wish to debug my debugger or some other JNI application with it. The "Holy Grail" for us in CDT-land has always been to be able to step from Java code into C++ code and back and forth seamlessly.
We've talked about this since the first CDT get-together back in July 2002. I've tried to get it to work with some success but got sidetracked with other things. The guys at Intel presented a proposal on such an environment at our Spring summit, but I haven't heard much about that since. I'm interested in hearing people's opinions on whether this is something they feel is important for Eclipse. And, of course, I'm interested in ideas on how we can build a community to help make this happen. If nothing else, it will be a lot of work.
Right Place, Right Time (part 2)
Having concluded that I haven't done anything really to help guarantee the CDT's success, I do have a few mantras that are hopefully contributing to a healthy community.
- Be open. This was actually pretty hard at first, when it was only QNX, and then only QNX and IBM/Rational, and the only people who cared about what you were doing were sitting down the hall or across town. But with the CDT development community spread around the world, working in the open is critical. In the CDT we have healthy discussions in Bugzilla and on the cdt-dev list and are looking at ways to share ideas and work together more often and more fluidly.
- Equality. A lot of open source projects tend to be dominated by one or two organizations and unfortunately a lot of Eclipse projects are this way. In the CDT, no one dominates. No organization has more than around 5 developers and most are around 2. And there are over 10 organizations involved. We honor the committer veto in our voting system, so we aim to get consensus before taking big steps, and usually do.
- Spread the Word. At times I feel like I'm a part of our marketing team here at QNX, and I guess part of my role is that. You can hope people stumble onto your project and get interested, but the media has a role to play helping you spread the word. And with the number of online magazines and webinar services in business now, they are always looking for a new angle. Take advantage of it.
Monday, September 25, 2006
Right place, Right Time (part 1)
One thing I've been asked recently is to share why the CDT is successful in the Eclipse community. It's a really hard question for me to answer since I'm not sure I can trace anything I've done as a project lead to help with this success. That and I'm not sure whether it really is successful. I've been pretty happy with its popularity with over 340,000 downloads of CDT 3.0.2 and 35 developers attending last week's CDT summit. I think we still have a long way to go to reach the quality levels of the JDT and VisualStudio, but now that we have so much attention on the CDT, we're trying to address that. So, I think the real question is - why is the CDT so popular, and what have we done as a project to help achieve that. My answer is in two parts so I'll make this a two part blog entry.
So the first part of my answer is this: "Dumb luck", or maybe slightly less self-deprecating, "Being at the right place at the right time". QNX started the CDT back in 2002 because we needed an IDE to help developers writing applications for our operating system be more productive. Now, we're not an IDE company and seeing what IBM had in store for building an open source community around Eclipse, we reckoned that would be the right way to go for the CDT as well. The hope was that lots of other non-IDE companies needed an IDE too and we could all share the development cost of it.
It was a gamble and it did take four years to reach this point, but in the end we were right. The reason the CDT is so popular is that there is a huge need in the non-Windows market for a universal IDE that vendors and users can easily leverage for their own needs. Given the huge popularity of Eclipse and with the CDT being the C/C++ solution for Eclipse, it just becomes natural that people gravitate to the CDT. That and the CDT promises to be a high quality, feature rich C/C++ development environment of the kind you had to pay money for in the past. Everyone likes free stuff that's good.
So in the end, I don't think we've done anything in particular to help make the CDT as popular as it is other than simply having the right solution at the right time. I wish I could claim otherwise, but it is what it is. In the next part of this blog entry, though, I will try to list some of the things we've tried aimed at making sure the CDT is an open, welcoming community that will hopefully keep this momentum going. Having something good and free helps with consumption of your open source project, but it doesn't provide any guarantees that it'll attract developers to help you build and test it.
Friday, September 22, 2006
CDT Fall Summit Wrap-up
When I finalized the agenda for the Fall Summit this year, I didn't think there was any way we'd fill up 3 days. Thinking back to last year, we really ran out of things to talk about by noon on the third day. I also figured that it would be a great idea if we had some time to go through the code and work through some of the nitty gritty details with the gang huddled around a laptop. So I decided to set aside Thursday afternoon for that.
Well, at the end of it all, given the number of topics we had to chop out and the number of items where I had to say that we were running behind, we could have spent a whole week. Mind you our brains would have been mush. They were anyway after three days. It was great to see that we have a big development community that knows a lot about the CDT and wants to make it even better. It also showed that we need to do this more often, maybe not travel, but find some way to share ideas and debate even virtually.
One of the best items we had, at least for me, was at the very end. I asked the group how we could improve how the CDT is run as a software project. The answer I got back was that we need to work hard on ensuring we have quality releases. In the past, we've been very accommodating to developers, accepting that they come and go and contribute what they can when they can. But that ad hoc approach to project management isn't leading to high quality releases, especially at the x.x.0 releases. The team showed a strong desire to, well, be "managed" as a software development team, much like they are when working on their own commercial projects.
So that is now my number one challenge. We need to tighten down the processes, be more strict on quality, and start putting together guidelines that we need the developers to follow. We also need to ensure that our test coverage is managed and improved. Manage the CDT much like any software development project. To me the big challenge is that none of these developers have any contractual obligation to follow any of this. And we have developers from over 10 different organizations. This is open source and they are volunteers (or at least their organizations have volunteered them). So it is going to be a bit of a delicate balance to ensure we have the right mechanisms in place and that the developers honor them.
But at the end of the day, I think just having processes and guidelines will give the developers something to follow and they will probably feel naturally obliged to follow. And with the strength of the characters that we have working on the CDT, I'm sure a little peer pressure will help too. I am very excited about moving into this next stage in the maturing of the CDT project. If it all works, maybe I'll do an MBA thesis on it :).
Wednesday, September 20, 2006
More Open Source Hardware
I've got Google Alerts notifying me when something comes up with Eclipse CDT and another one for Eclipse embedded. I'm starting to get a few of these a week, including two today. One came from Lattice Semiconductor. What was noteworthy in this case was they were also open sourcing the design for their 32-bit microcontroller. Now these guys are a lot smaller than Sun who open sourced the design for their Niagara 8-core Sparc chip. But I'm starting to wonder if there really is a trend happening here.
If this means we'll see more people customizing chip designs using hardware description languages and building the software that will run on them, then Eclipse is an obvious host for this kind of hardware/software codesign activity.
CDT Summit Day 1
Well we got off to a great start at the Summit today. The day went by real fast and we were all pretty burned out by the end of it so it must have been good :). One highlight for me was when I asked for hands on who was a committer. I got the 7 or so I was expecting. I then asked who had contributed patches. To my happy surprise, I got over 20. That explains why we have so many patches outstanding in bugzilla for the CDT. It is certainly one sign the CDT contributor community is healthy, but it's also a sign that we have a lot of work ahead to keep up and to start nominating more committers.
We spent the day introducing each other and then dug deep into the CDT DOM. I have to admit that one was really dry, and I was the one giving it. We then got an update from the Intel team on what they'd like to do with the build information in the new project wizard and the project properties. Looks like a big change that should hopefully smooth out some workflow issues that we have there. Another big day tomorrow as we review some of the new source navigation and indexing features and dig into debug.
Another thing I'm trying is Skypecast to broadcast the proceedings. You can check it out by following the link. It is definitely a technology preview and we had a hard time getting remote people hooked up to the sound system we have running without horrible echo and feedback. But the broadcast out sounds O.K. (as long as I mute the mic on my laptop, sorry Norbert!) I'm sure it would work better if everyone was working through headsets, instead of trying what we're doing with capturing the audio through a sound board. But it is an interesting way of communicating when working on open source projects.
Monday, September 18, 2006
CDT Summit Eve
The CDT Fall summit for this year starts tomorrow here at QNX headquarters and I'm getting both excited and nervous, as I guess I should. We've had a number of summits in the past starting all the way back with the first one in July 2002 when QNX brought the CDT as we know it to the world. That summit and pretty much every one since then has had kind of the same feel. Lots of people have been interested in what was going on, but few have had the resources to commit to helping out.
But you know, there's nothing wrong with that. It's a pretty difficult decision for corporations to commit resources to work on open source projects. It's difficult to track the return on their investment and there are a slew of legal issues that need to be addressed and tracked to make sure the IP walls are set up correctly, and I'm not just talking networking. I've learned to be patient, not get too down when hopes fail to materialize. In the end, simply using the CDT and distributing it in their products means that the CDT is getting good test coverage, which is just as important these days.
My feel for this one, though, is different. Maybe it's because I'm starting to be overly optimistic. We have around the same number of attendees registered as we did last year, and a lot of them are the same faces. However, this year most of the attendees have been contributors to the CDT. Some have become committers over the year and some will become committers shortly after the summit. We'll use this as an opportunity to make sure we are talking to each other and co-ordinating our work. We'll also use it as a team building exercise, as I'm sure we'll find a few battles along the way. It sure helps when you know the person at the other end of the bugzilla entry when smoothing over issues.
Finally, this year it looks like our contributors summit will focus much less on recruiting contributors and much more on co-ordinating actual contributions. It's a much more fun summit to run and hopefully a much more fun summit to attend.
Friday, September 08, 2006
Open Source Hardware
I'm sure I've told this story before somewhere, but I used to sit across from an ASIC designer many years ago. This is when I first started using ObjecTime Developer for software modeling back in the early 90's, a couple of years before I joined the company. He was marveling at how we software designers had started using graphical tools, just as ASIC designers were abandoning similar tools for textual description languages. I'm not sure why they made that transition, but given the complexity of the chips they were designing, my bet is that the graphics didn't scale well for them and the tools back in the early 90's weren't very good - no Eclipse back then!
This guy was coding in a hardware description language called Verilog. I peeked over his shoulder one day and saw that it looked a lot like C code. I found that very interesting, but it took many years before I sat down and took the time to learn a bit more about the language and what it could do (there wasn't much of an Internet back then either). It indeed was C-like and was structured a lot like C, and I'm sure it suffers the same scalability issues that programming in C can sometimes cause. Thankfully, there is an Eclipse plug-in to help you write your own Verilog code.
Fast forward to the present and my interest in multi-core processing: I found it quite interesting when Sun announced that they were open sourcing their Niagara line of processors. Diving deeper, I was able to find the Verilog code for their T1 chip published on www.opensparc.org. Other than being cool to look at, and maybe interesting for students learning CPU design, I didn't really see the benefits of open sourcing a CPU design.
Then yesterday, I ran across an announcement from Simply RISC that their engineers had taken the open source T1 code and made a simple SPARC embedded processor out of it. Of course, with the T1 source being GPLed, they have released the source for their CPU as well. Is this the start of something? I'm still a bit doubtful. Chip companies make most of their money on the designs they come up with, not necessarily the chips themselves. But it is an interesting phenomenon to watch out for.
MultiCore: The True Promise of Eclipse
So, I needed a board to help try out some JTAG things (for those readers not involved with embedded development, a board is a little computer kind of thing). We had just received an OMAP board which uses a TI chip that contains both an ARM general purpose processor as well as a TI DSP (digital signal processor). Of course, my focus was on the ARM processor that runs our operating system, and it was pretty cool getting it up and running with little effort.
But after a while, I started wondering what people use this board for. I've been away from embedded development for a few years and man have things changed while I was away. I soon discovered that the main use of this thing is for audio processing. There are some audio jacks as well as a connector to plug in an LCD screen. By programming some audio processing algorithms into the DSP, you could make a pretty cool multimedia device with this thing.
My curiosity then wandered over to how one would program the DSP. If I had a compiler integrated with the CDT, and a debugger that understood how to debug the DSP and was also integrated with the CDT, I'd then have a complete multi-core development solution where I could have regular software projects and DSP projects and work on them all at the same time.
It's a very interesting time in the embedded industry with the multi-core phenomenon. I think we'll see a lot of new processors come out that have specialized parts. What I hope to see, and I'm pretty sure it will happen, is different vendors working together, integrating their Eclipse-based technologies and unifying their development activities into a single workflow for the developer, who sees these boards as a single target. That is the true promise of Eclipse!
Sunday, September 03, 2006
Did someone say doughnuts?
I just read Bjorn's note comparing doughnut stores to open source businesses. From what I hear, that note probably made more of an impact on Canadians than it did on others. For some reason, we've rallied around the doughnut as our national pastime and our waistlines are paying the price!
At any rate, I totally agree with his assessment. My spin on it: you can make money by packaging up open source and selling priority support for it, and you can make money by taking open source and customizing it for a small vertical market. Certainly we at QNX are doing the second, taking Eclipse and customizing it to work well for developers writing applications for our operating system.
Another analogy I thought of also has to do with Tim Hortons. After Wendy's (the burger Wendy's) merged with Tim Hortons, you started seeing a lot of Wendy's and Timmy's co-located in the same restaurant. So the analogy could go that people love doughnuts. So when they come to Tim's and get their fix, they see the Wendy's there and decide to stay for lunch.
So what I've also seen vendors do is package Eclipse as a sort of loss leader to get people interested in their higher margin products. My recent blog on the JTAG vendor Ronetix is an example of that. And I think we'll see a lot more of this as Eclipse becomes ubiquitous (2.27 million users!). Vendors will find they have to play the Eclipse game just to keep up with the Joneses.
Saturday, September 02, 2006
Windows SDK RC1
I've stated a few times now that although the CDT is highly used in the embedded and Linux/Unix markets, I don't think we've conquered the world until the CDT is seen as a valid alternative to Visual Studio for Windows development. Looking at the whole Eclipse ecosystem and all the components available for it, I just think it has a higher value proposition than Visual Studio. At the very least you get a cross platform development environment that you can theoretically build any application with, once all the pieces are there that is.
So I've been working a little on adapting the CDT for Windows development starting with support for Microsoft's C++ compiler. Over the last couple of years they've been shipping it for free, first as a separate toolkit, and now as part of the .Net 2.0 SDK. But, in order to get it working, you had to download a few pieces, including the Platform SDK, and if you wanted to do debugging outside of Visual Studio, the Debugging Tools for Windows. I felt it was pretty complicated to set up, especially for newbies, and of course these pieces aren't redistributable so we couldn't shrink wrap it for you.
But someone pointed me at the new Windows SDK which is part of the Vista program (which is why I was confused since I thought it was a Vista thing only, but it is not). This SDK has recently reached Release Candidate 1. As described in this MSDN TV program (these programs are pretty useful and something we should consider for Eclipse), this new SDK is really a combination of all the pieces you need to build Windows applications, both managed (i.e. .Net) and unmanaged (i.e. native).
What I found interesting was their focus on providing command line tool support for "people who like to work that way". Now, I don't know anyone developing Windows applications who likes to work that way. So I read into it that they are really talking about 3rd party IDEs such as the CDT. With the tools provided by this SDK, it should be a pretty simple matter of integrating them as a tool chain, just as we do with the gnu tools. Download the SDK, download the CDT and the Windows integration, and you are off and running.
At least that's my hope, which of course will only be successful if it receives community attention. But it sure would be a boost for Eclipse to be seen as the development environment for everyone, without prejudice.
Friday, September 01, 2006
CDT everywhere
I continue to be surprised by how many vendors are redistributing the CDT with their products. Lately I've become interested in JTAG hardware debuggers and how to best hook them up to Eclipse for some real low level bit hacking debug workflows. This is something that probably deserves its own blog entry, and it's not really the theme of this story.
At any rate, I ran across a JTAG vendor called Ronetix who appears to build a pretty full featured device similar to the Abatron device I've been playing with lately. Quickly browsing the Ronetix web site, I see that they have a Starter Kit that they sell. Lo and behold, it "Includes Eclipse IDE". Going to the product page for the starter kit, I see they have a screenshot of Eclipse in action, and, yes, it is the CDT.
At some point I need to sit down and figure out what is driving the success of the CDT. It certainly fills a need that maybe isn't getting addressed by others, i.e., an IDE for non-Windows development that is extensible and ubiquitous (mind you I'm still keen on CDT for Windows development too). I'll have to ask the 34 developers that are currently registered for the upcoming CDT Contributors Summit why they find the CDT important enough to invest in. No matter the reason, it's been a fun ride and we're looking forward to a great year of collaboration toward CDT 4.0.
Tuesday, August 15, 2006
Greenphone, the open source phone
Every time I look at the net these last few days, something pretty cool pops up. This time it was an announcement from Trolltech, the Qt people, about a new product they plan on releasing in September called Greenphone. This is a GSM/GPRS phone built on an embedded Linux kernel with Trolltech's embedded version of Qt called Qtopia. It is really only sold as a development platform and comes with the necessary SDK.
Now, I think this is a bit different than the open source gaming device I talked about earlier. I don't think Trolltech wants to get into the phone business. In some ways, I think they are just curious about the kind of applications people will build for such a device. And, of course, in the end their goal is to sell more Qtopia licenses to commercial developers.
But I've always wondered what kind of applications make sense on such a small platform. Web browsing when the screen is only 240 pixels wide makes even less sense than browsing the web on a TV. I'll be watching along with Trolltech to see what people will come up with. And, as always, it'll be interesting to see how many people use the CDT to develop for this platform.
Microsoft XNA Express, maybe they do get it...
Continuing on my vacation game development theme (and yes, I am spending a lot of time with my family and doing things around the house, it's not all geek time ;), Microsoft has just announced that they will be releasing an Express version of their XNA Game Studio for free for Windows development and only $99 for Xbox 360 development. This offering will build on top of their free Visual C# Express IDE and will include some tools for integrating content as well as their XNA Framework game-engine-type-thing. They are really pushing for game development in C# and the CLR, even for the Xbox 360.
As the guy in their XNA Overview video mentioned, the game developer market is pretty small relative to others, and selling tools to this market isn't going to be a money maker. What's important to Microsoft is that they help developers as much as they can to get them building content for Microsoft's platforms. It doesn't really matter how much they charge for the tooling and frameworks since they will make their money on the platforms. And with good free offerings, they'll get the kids hooked making games for Microsoft platforms, and they'll carry that into their careers as professionals.
I am still of the opinion that Eclipse can be an even greater game development environment since it is truly multi-platform. There's no reason why we couldn't build a set of plug-ins that allow developers to target all of the consoles and all of the desktop platforms, including Microsoft's.
Actually, there may be one reason: who's going to pay for it? Microsoft is busy devoting itself to Visual Studio, and I haven't seen much interest from the other vendors in contributing to such an open source project (although I know from bug reports and one quick discussion years ago that Sony's PlayStation group is, or at least has been, using the CDT). It would take some sort of consortium to organize and pay for the project and get involvement from the various players. It could be done and it would be cool for Eclipse, but I'm not sure that industry is ready for such co-opetition as much as the embedded industry is.
Sunday, August 13, 2006
GP2X - The open source handheld gaming system
Well, I'm on holidays right now, but I still like to keep in touch with what's happening in the industry and still monitor a few Internet rag sites regularly, including my favorite, The Inquirer. Today, I saw in one of their Hardware Roundup postings a link to a review of the GP2X Personal Entertainment System, which uses an ARM dual core processor that runs Linux. I've always been interested in game development, so finding a handheld gaming machine that ran Linux sent me off on a trail to find out more.
Well, it turns out it's made in Korea by Gamepark Holdings as a follow-up to a previous edition handheld which was actually made by another company called Gamepark. Apparently the engineers didn't like what the original company wanted to do as a follow-up, so they spun out and made an almost identical company to do it the way they wanted. Interesting inside story there, I'm sure.
Anyway, they advertise this machine as the "Open Source Gaming Device", which I find pretty cool and again fits into the model I've seen over and over again with open source development. The company sells the device (and it's pretty cheap at only about $200), and then fosters an open source community around writing software for it and manages an SDK of open source libraries to support them. They also use a number of the open source Linux apps to build up a suite of multi-media functions for video and audio for users to get started. I haven't seen any analysis about how successful they've been but the community forums seem to be pretty active.
I was a bit disappointed, of course, when I saw that the SDK didn't ship with Eclipse/CDT components, but I was happy to see someone in their community blogging about using the CDT in this environment. Of course, it's a natural fit with CDT's built-in support for gnu development, including cross-development for embedded operating systems such as Linux (and QNX Neutrino ;). I would be quite interested in helping anyone who would like to push to make the CDT a more formally "supported" development environment for this cool little box.
Thursday, July 27, 2006
Ballmer: Software is becoming a service
Remember my blog entry on "Software as a Service Industry"? Well, I had a chuckle when I read today's ZDNet top story: "Ballmer: Software is becoming a service". See, it's not just me, lol.
I think Microsoft will have a very hard time turning into a services and solutions company. They've spent decades now focusing on building and selling great products. The paradigm shift will certainly confuse their customers for the first little while, if not their employees.
But I see it everyday. Every time a customer comes in with a specific requirement that really only applies to their environment, the stronger I feel that selling software out of a box just won't cut it any more, at least for complex software we tools builders end up making. With the ongoing costs of development and maintenance of that software, it makes more sense spreading out the revenue to match. And it places an even higher importance on the extensibility of that software, just as we see in Eclipse projects today.
So, we'll see how this all pans out, but if Mr. Ballmer says it's true, it must be true :)
Tuesday, July 25, 2006
"AMD to buy ATI"
Now, have you not seen that headline enough yet?
Any time there's a bit of a shakedown in our industry I'm always intrigued. It's not what the industry analysts have to say about it, and certainly not what you read in the press release from the parties involved. It's the story behind the story that piques my interest.
So, I blast through all the reports and try to piece together what is really happening and what it means to our future. For the AMD/ATI thing, the Inquirer yet again puts forth an interesting view on the insider story. Whether what they say is true or not we may never know, but I have seen a lot of rumors posted there that eventually became fact, including the AMD/ATI deal.
I do think that they present a good argument for what is happening, and it seems to be driven by the end of the MHz race (thanks, I can cook a roast in my PC case now, enough already!) and the push for many-multi-core a la Sun's Niagara architecture. AMD also has some pretty cool ideas on how to integrate co-processors that do cool things into their cache-coherent architecture and I'm sure the ATI acquisition will help speed some of these along. And the Inq is pretty sure Intel is working on similar architectures.
So what does that mean for us tools developers? Well, these events really give me more confidence in my prediction that a programming model change is a-coming. Applications will more and more need to take advantage of a multi-threaded environment to get performance gains. We can no longer rely on ever increasing MHz to save us. For C and C++, it means building more multi-threading constructs into the language, something the Parallel Tools Platform (PTP) people are working on with tooling for APIs like OpenMP.
As I'm sure everyone who's built a multi-threading application (such as Eclipse plug-ins) knows, working in this environment is difficult and somewhat unpredictable. The door is wide open for a new set of analysis tools that we can use to scope out when things are going wrong. And I'm sure our experience with such tools in the embedded industry, where we have had to deal with the unpredictability of environments for a very long time now, will become of value to everyone.
It's an interesting time again in our industry and we'll all need to keep our eyes on it and be ready to hold on tight as yet another paradigm begins to shift.
Tuesday, July 18, 2006
Sustaining Open Source Projects Through Turnover
When you have an open source project such as the CDT that has been around for a while, you end up having to deal with turnover in the people that are working on that project. There are usually a couple of reasons I've seen as to why this happens. Either they have been revectored or promoted to work on something else, or they've left the company that was contributing the resource to a company that doesn't want to invest their resources that way. (As an interesting side note, we have quite a few examples now of people who have switched companies but are still working on the CDT, including yours truly, but that's a topic all on its own...).
In dealing with turnover, I find myself going through a paradigm shift from young project to mature project. In a young project, you are struggling to get people and organizations to contribute to your project. So you find yourself accepting contributions that may not perfectly fit the mould and architecture you are trying to set out, but getting those contributions means getting people involved and showing the world that your project has momentum and is "the exciting place to be".
But with turnover, without proper documentation, automated tests, and good architectural fit, you start finding that the code that helped get your project going now becomes extra baggage. You start struggling to add new features and you find you need to either replace or simply remove the functionality it provided. Without someone to keep the code alive, it quickly gathers "rust", which starts to spread to places where you are trying to do new work.
So the lesson of the day for me is to keep the long term vision, including a well laid out architecture, for the project front and center from day one. Try to influence new contributors to follow that vision and to manage the churn in that vision so that you can sustain the code as long as you can. This is all basic software engineering school stuff, but it applies to open source projects as much as it does to commercial ones. And I think I am now of the opinion that having a strong vision like this can serve as much of a draw for contributors as a wide open door does. Or maybe the growth in the CDT lately has given me a bit more confidence. Or maybe it's my new rose-colored glasses...
Thursday, July 13, 2006
JUnits are my friend
Now I'm sure everyone who writes code in Eclipse is well aware of the power of the JUnit, but I just felt like expressing my appreciation for them right now.
I am in the middle of adding a few constructs to CDT's new index that didn't make it into 3.1.0 and was worried about whether the code I had just written was correct or not. Of course, the CDT is chock full of JUnit tests for the DOM and other features, but in the mad rush to get the new indexing framework in I cut corners and didn't write any JUnits for it. Instead, I had my new Index View that I used to browse the index and visually verify things. (Now that view was supposed to be hidden since it's not quite complete, but thanks to those who found it and have raised bugs against it :).
Well, now that I have a bit more time, I figured I had better take the plunge and start writing some. To my surprise, with the new indexer architecture it was actually pretty easy to programmatically create a project, import some files from my test plugin into the project, and run the indexer over them. I was then able to easily write some code to search the index and make sure everything was there that was supposed to be there.
Alas, of course, it showed me that it didn't and I have to now go and find out why that reference to my enum didn't get added. In the end, writing JUnits will have saved me more time than it took to write them. No more excuses. And thanks to Mr. Joe Unit for saving the day yet again!
Friday, July 07, 2006
How many engineers does it take to turn a CDT?
We had our regular monthly CDT contributors call yesterday. These are usually low key things where we quickly touch base, talk about release planning and the occasional technical issue. We've had calls that have lasted only 20 minutes. Sometimes they'll stretch to the whole hour if someone brings up a technical issue and we talk slow enough about it.
This month's meeting struck me a little differently, though. First of all, I was able to get a full head count and we had 21 people on the call. Of those people, I'd say 16 of them were people that have contributed code or are planning on contributing code. I also know that there were 3 or 4 such people that weren't on the call. I found that I had to cut off discussions and table them for future meetings because we were going to run past the hour we have allocated.
When I joined QNX last year and was handed leadership of the CDT, I remember mentioning to Mike M. that we had a hard time attracting contributors. At the time we really only had 5 or so people actively contributing. We knew the interest in the CDT was high and just needed to find a way to turn at least some of that interest into contributions so that we could continue to grow the CDT.
I'd have to say now we are finally getting the attention that the CDT needs. With contributors counting around 20 and a lot of people out in the community testing and raising bugs, I'm starting to feel like we can actually reach the goals I had personally for the CDT and go way beyond. We have a bright collection of talent now and they are all doing great things. Even over the last week as we opened up CDT 4.0 development, there have been some cool enhancements going in (like common navigator support) and I can't wait to try our first weekly build on Monday.
But the thing that really struck me after the meeting was that I am going to be a busy man. With this many people contributing to the CDT, it's going to be a great challenge to make sure we don't run over each other. Communication is going to be key and I will take on the responsibility to make sure this communication happens and to facilitate the resolution of any conflicts that may arise. It's going to be a great run, though, and I can't wait to see what we accomplish as a team.
Saturday, July 01, 2006
How many engineers does it take to push a button?
One of the benefits of being located in Ottawa is that I get to rub shoulders with the who's who of Eclipse at interesting times. One of those times happened again yesterday as the button for releasing Callisto was pushed. Now, it wasn't really a button, and it took about half an hour from the time Denis started, when the mirrors were ready, until all the web sites were updated and we could download Callisto. But it was a moment.
It was particularly underwhelming for the newspaper guy who was there, but I did get a chance to interview with him and hopefully sent him off with something interesting to write other than a bunch of computer geeks hitting refresh until we could see the magic "3.2" appear. But such is our life.
I came away very impressed with the work that Denis and his team do. Sometimes we forget how complex an operation a site such as eclipse.org is. But it takes a team of dedicated professionals to pull it off, and my hat's off to Denis, Matt, and Nathan for pulling off one of the most challenging releases you'll see in this industry. And it was pretty cool to be in the nerve center as it was happening. Not to mention, they were all using Eclipse to manage the site, which was also cool.
Tuesday, June 27, 2006
What does Callisto mean to the CDT?
I've been lucky enough to be involved with the CDT since the day QNX proposed it to the world back in 2002. It's been a very interesting journey. In the early days, the CDT was almost a side project at Eclipse where a few vendors had a dream of building a great C/C++ IDE and tried desperately with the few resources we had to reach the bar that the JDT guys continuously raised and continue to raise on us. But in those days the people working on the CDT didn't have a whole lot to do with the other projects at Eclipse.
Callisto has changed that in a lot of ways. First of all, just delivering at the same time as the other 9 projects opens up opportunities for working with them to bring their features to the C/C++ world. I've had discussions with TPTP about their static analysis features built on top of the CDT. It's still small but a start. And others will arise in the future I'm sure. But the biggest benefit was our tighter schedule with the platform where we became early adopters and were able to get bugs fixed before having to wait for a maintenance release. And the platform team was very eager to help us out.
For the CDT, even the fact that we knew about 8 months in advance when our delivery date was going to be was a huge benefit. Until then, the release dates for the CDT were at the whim of the vendors providing committers to the CDT as we tried to match vendor release plans with CDT release plans. It made feature planning very difficult (we even had a 4 month cycle once!). And we look forward to the next release in a year's time which will give us the opportunity to put forward a great program and make the major version jump to CDT 4.0.
For me personally, though, it was just the opportunity to work together with the 9 other project leads and Bjorn, Ward and Ian from the EMO. These are great people and it was a pleasure to work with them towards this great common goal that even Mike said wasn't possible. We proved them all wrong and have started a new era at Eclipse. And I hope you all enjoy the fruits of our labour, Callisto!
Wednesday, June 21, 2006
Can't talk now, coding...
I've been pretty quiet lately with the blogging. The main reason is that I've been working certain parts of my body off as I try to implement a new indexing architecture for the CDT. There is a lot of good news and a little bad news with this project. The good news is that I can now index Mozilla in 14 minutes on my laptop! In CDT 3.0, that took around 50 minutes, an improvement of around 75%. As well, as you change files, you hardly notice the indexer running, whereas it could take up to 12 seconds to deal with the change in 3.0. I almost fell over when I got the first timing at 14. Followed shortly by a dance of joy.
How did I do it? Well I took a hint from the precompiled header feature that most compilers are starting to support. As I'm indexing, and potentially other parse activities as well, I skip over header files that I have already parsed previously and get the symbol information from the index. This required building a more structured database for the index as opposed to the string based flat table in 3.0. It turns out to be much faster since parsing C and especially C++ is a lot slower than the database lookup. This is why incremental times are so fast. I just didn't realize the whole reindex operation would be so fast as well (my target was 20 minutes for Mozilla).
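The core of the trick can be sketched in a few lines. Everything below is invented for illustration (the real CDT index is a structured on-disk database, not a HashMap, and the "parse" here is a stand-in for a full C/C++ parse), but it shows why the savings compound: a header included by hundreds of source files gets parsed exactly once, and every later include is a cheap lookup.

```java
import java.util.*;

// Sketch of the precompiled-header-style caching described above.
public class HeaderCachingIndexer {
    private final Map<String, List<String>> index = new HashMap<>(); // header -> symbols
    int parseCount = 0; // how many expensive parses we actually did

    // Stand-in for the expensive C/C++ parse: pull "symbols" out of the text.
    private List<String> parse(String source) {
        parseCount++;
        return Arrays.asList(source.split("\\s+"));
    }

    // Returns the symbols for a header, parsing only on the first request.
    public List<String> symbolsFor(String header, String source) {
        List<String> cached = index.get(header);
        if (cached != null) {
            return cached;          // cheap database lookup, no re-parse
        }
        List<String> symbols = parse(source);
        index.put(header, symbols);
        return symbols;
    }
}
```

Index two source files that both include the same header and `parseCount` stays at 1, which is the whole reason both full reindexes and incremental updates got so much faster.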
The bad news is that, while it is incredibly fast, it does suffer from being young. There is less captured in the index than there was in 3.0, for Mozilla about 20% fewer symbols. So searches for certain things aren't going to return everything you were looking for. But I have been able to capture the high runners. More bad news is that we are getting spurious StackOverflow errors because not all information is in the index and some of the algorithms we have for symbol resolution weren't prepared for that. So as a result, the new index is only used for Search actions, where we can recover gracefully, and not for content assist and open declaration.
But back to the good news, as we work more on improving the contents of the index I'll be able to direct all parser operations to it and make the CDT much more responsive for all operations (including my baby - content assist). And even as it is today, there is enough information there for the majority of workflows. Even the field engineers at QNX are extremely happy with it and these are the front line guys who need to make sure their customers are happy. More good news is that I'm getting more help with the indexer, both testing and coding. It's tough to do this as a one man show and I am appreciating all the help I'm getting from the community.
With the new indexing framework in place in CDT 3.1, the opportunities for exciting new features are wide open. And one of the major objections to using the CDT on large complex projects has been eased greatly. It's time to get the message out, now that I can lift my head away from the code!
Monday, June 05, 2006
Software as a Service Industry
Curt Schacker, apparently a veteran of the embedded software industry (well, his resume looks good anyway), has an interesting article on LinuxDevices.com on how he sees the state of the embedded software industry. His contention is that we've been trying to shove a giant square peg in a giant round hole (his words, not mine), and that the embedded software industry is really a service industry and isn't well served by off-the-shelf software.
Now mind you Curt is a co-founder of, you guessed it, an embedded services company. But I have definitely seen the trend, especially in the tools area. It is really hard to sell software development tools in a box. Every customer seems to have different processes, different configuration management systems, build systems, coding standards, you name it. It is very difficult to build a suite of tools to satisfy them all.
The biggest success stories I've been a part of in this industry come when we sell the customer a box, but then follow it up with intensive support or custom development to make the software in the box work best for them. There's nothing worse, for me anyway, than having a customer who bought my box but then let it sit on the shelf because it didn't really meet his needs. It's not so good for the reputation and future sales.
This is where programs like Eclipse really play into the business needs of software vendors. First, by sharing the development costs with other companies, our boxes are cheaper to produce. However, with Eclipse's extensibility and customizability, it is easier to take those products and customize them for individual customers' needs. Selling services may be more difficult and, as Curt mentions, doesn't provide the multiples that products do, but it might be the right approach that customers have always wanted and the best road to profitability for software vendors.
Sunday, June 04, 2006
Web server on your phone?
One of my "too many" interests in the computing industry is how to best serve up web content from embedded devices. The main use I see for such a capability is to allow maintenance personnel a convenient and standard way of getting at state and configuration information from the devices under their care. You see it very commonly used for configuring home routers such as my Linksys.
If you were at the CDT BOF at EclipseCon 2005, you would have seen a demo I gave of using gsoap to do this kind of thing. Since then, I've come to the conclusion that SOAP and related protocols are over-solving the problem. You can do what I was trying to do with simple HTTP GETs. And with the emergence of AJAX to provide more interactive content with web pages using simple HTTP requests, this really starts to look like the right architecture.
The problem I had was how do you integrate an HTTP server with your embedded application. There are a few httpd library packages around, but none of them appear to have enough momentum behind them to take the industry by storm. I had considered making my own, but going through the HTTP spec I quickly came to the conclusion that it would take a little more work than I wanted to soak into it at this point.
Then I ran across Nokia's Raccoon project where they've ported Apache to the Symbian OS that they use in their cell phones. My head almost fell off. I thought Apache was this big monolithic web server that is driving the bulk of the web servers on the internet, big iron types. Could Apache be made small enough to fit into embedded devices? Nokia seems to have been able to do it. And looking at Apache's modular architecture, it looks like you could write some cool modules that can interact with the software on the device without having to resort to the slow and clunky CGI interface. Very cool, and something I need to look into more.
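To show how little machinery the "simple HTTP GET" idea actually needs, here is a toy sketch (class and path names are mine; a real device would use a proper httpd or an Apache module, and this is Java rather than the C you'd use on most embedded targets) of a device answering one status query over a raw socket.

```java
import java.io.*;
import java.net.*;

// Toy sketch: a device exposes one state value at /status via a plain HTTP GET.
public class TinyStatusServer {
    private final ServerSocket server;

    public TinyStatusServer() throws IOException {
        server = new ServerSocket(0); // ephemeral port; a device would use 80
    }

    public int port() { return server.getLocalPort(); }

    // Accept exactly one connection, answer one GET, then return.
    public void serveOne(String status) throws IOException {
        try (Socket s = server.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             Writer out = new OutputStreamWriter(s.getOutputStream())) {
            String requestLine = in.readLine(); // e.g. "GET /status HTTP/1.0"
            String header;
            while ((header = in.readLine()) != null && !header.isEmpty()) {
                // skip remaining request headers
            }
            String body = (requestLine != null && requestLine.startsWith("GET /status"))
                    ? status : "unknown";
            out.write("HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n" + body);
        }
    }

    public void close() throws IOException { server.close(); }
}
```

An AJAX page, or a maintenance tech's browser, just fetches `/status` and renders whatever comes back; no SOAP envelope, no CGI fork, required.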
Thursday, May 18, 2006
GWT, Another Turning Point?
I still remember the first time I found out that I could drag the map in Google Maps to pan around the point I had searched for. The funny thing is that someone had to point out to me that you could do that. It wasn't at all obvious to me at first and I really wondered how the hell they did that. Was it some scary voodoo magic?
Of course, now I know. It all has to do with sending requests off to the server using JavaScript and updating the HTML on the page on the fly in what we now know as AJAX. It works in pretty much any browser that supports JavaScript and it lets you create some pretty complex front ends without having to learn MFC or Swing (and, no, this isn't a plug for people to read my page, I hate Swing for all the reasons Phillip does and won't mention it again, much) or RCP for that matter. And, being in the embedded software industry, I think this is still a great way for embedded devices to get quick remote GUI interfaces.
So, when Mike pointed out the new Google Web Toolkit, GWT, I was intrigued. Taking a look at their pages, it was reminiscent of what Microsoft has done with Visual Studio and MFC as a toolkit for Windows and what we're doing with QNX Momentics. Build a nice IDE and a good framework and developers will come. GWT turns out to be something similar for AJAX applications and uses Eclipse for the IDE.
The real question I have is, why is Google doing this? Sure, they got a ton of money with their IPO, but surely this isn't charity work for those of us interested in building web apps that don't have anything to do with Google. But they are making a change in the industry, one where developers working on client software need to care more about which browser their users are going to use than about the operating system. I think this will open the door for others to jump in and take some of the client OS share away from Microsoft. But that still leaves the question, why does Google want to do that? hmmmm....
Sunday, May 14, 2006
I Hate Typing!
Those who have worked with me in the past know I have a favorite mantra that drives a lot of what I do: "I hate typing!" Now, after 20+ years working on computers, I can type pretty fast. But I can still think faster than I can type, and that frustrates me at times. Mind you, sometimes the extra sober thought between keystrokes has saved me from implementing the odd bad idea.
But this is the main driver for me when building tools. I find that the best tools are those that allow me to express my ideas by the fastest means possible. I have spent a lot of my tooling career building code generators for visual modeling tools, especially state machines. I've generated a lot of code relative to the number of user gestures. Customers loved it and I think it is still the best example of getting ideas into your software faster than you can type in the code. Hopefully as the Eclipse modeling tools grow, we'll see more of this.
In the meantime, we are still pretty much left with probably the most important tools in our tool chest, the programming languages. People who work with me also know that "I hate Java". Yes, it's an evil irony that I have spent the last 5 years being a Java programmer. As the JDT adds more accelerators, like more sophisticated content assist and refactoring, I hate Java less. But there are some concepts I find hard to express in Java, such as the complicated memory-mapped binary files I work with in the PDOM, the CDT's new index, and I just find I have to do a lot of typing to do what I need to do.
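The memory-mapped binary file wrangling mentioned above is the kind of thing Java's NIO can express, if verbosely. A minimal sketch — this has nothing to do with the PDOM's actual file format; the two-int record layout is invented purely for illustration:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Hypothetical mapped-record store: each record is two ints laid out
// directly in the file -- a value, and the file offset of the next
// record (0 meaning end of chain). Offset 0-7 is left as a header area
// so that 0 can serve as the null link.
public class MappedRecords {
    private final MappedByteBuffer buf;

    public MappedRecords(File file, int size) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(file, "rw");
        buf = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, size);
        raf.close(); // the mapping stays valid after the channel closes
    }

    // Write a record at 'offset': a value plus a link to the next record.
    public void putRecord(int offset, int value, int next) {
        buf.putInt(offset, value);
        buf.putInt(offset + 4, next);
    }

    public int value(int offset) { return buf.getInt(offset); }
    public int next(int offset)  { return buf.getInt(offset + 4); }

    // Walk the chain starting at 'offset', summing values until the
    // null (0) link -- i.e. traverse a linked list that lives on disk.
    public int sumChain(int offset) {
        int sum = 0;
        while (offset != 0) {
            sum += value(offset);
            offset = next(offset);
        }
        return sum;
    }
}
```

Even this toy shows the pain point: every field access is a hand-computed offset into a `ByteBuffer`, where C would just cast the mapped region to a struct pointer.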
The more I learn about C#, the more I realize that it comes the closest to the way I want to work. It has the best of Java, such as garbage collection and anonymous functions (the equivalent of anonymous classes in Java). Plus, it gives you the best of C++, such as stack-allocated structs and operator overloading. And, if you don't feel like playing it "safe", you can actually use pointers and take more control over your memory. I have no immediate need for C# in my day job, so learning it has to be relegated to hobby time, of which I have precious little these days. But it would be interesting to see how fast I can get my ideas into code without typing so much.
Thursday, May 11, 2006
Tracking Language Trends
After reading Ian's post on Eclipse language support, I had to check out where he got the ranking information. It is provided by TIOBE Software's Programming Community Index. I'm sure you can debate the merits of this index and the fact it is based on hits from the top three internet search engines, but as with all polls, it is pretty interesting to look at.
I'm pleased to see that, despite continuous predictions of C and C++'s demise, they are still #2 and #3 in this index, "eclipsed" only by Java. It is also interesting to note that C is still way ahead of C++. This is something we are seeing in the embedded space, where C++ is still seen as too expensive in size and performance for devices. For very small footprints, this is actually true, but the amount of memory and CPU power available in embedded devices continues to grow, and this is becoming more a cultural issue than a technical one.
I was surprised to see PHP listed so highly, at #4. I guess I'm still suffering from my brainwashing that James Gosling did on me that Java was the only language for internet applications. The rise of PHP is probably killing Perl, which isn't surprising as I consider Perl one of those "write-only" languages. I was somewhat disappointed to see the .Net languages so low, but then I'd bet that their query on Basic is picking up VB.Net unintentionally, which if true puts it on par with PHP.
I've been a huge fan of programming languages and paradigms since my university days many moons ago, which is probably why I'm so passionate about the CDT and why I keep pushing the CDT to make sure it can handle multiple languages. To a large extent, we treat C and C++ as separate languages, so adding a new one shouldn't be that hard. We have the Photran team exercising that with Fortran (which failed to make the top 20 but sits at #21; stay tuned for its renewed meteoric rise!). I also have a hook on a student in Google's Summer of Code who is interested in doing C# and VB.Net for Mono.
Being compiled languages, these benefit mainly on the build and debug side of things, but I'm hoping to extend the support to the editor and indexing side with the CDT's code models. IDE generation is one thing, but to build fully functional environments for complex, industrial-strength languages with all the whiz-bang features of the JDT, you need a solid extensible framework, which we are hoping to provide with the CDT. It's all really cool stuff, well, for me anyway, and, of course, it helps build the CDT community by expanding its horizons.
Monday, May 08, 2006
ANTLR v3, Everyone's Parser Generator
And now for something completely different...
I've been toying with the idea of expanding my desire to better support Windows development into better supporting .Net development. There are lots of interesting things happening there, not just on the Windows side, but on Linux as well with Mono. Not to mention, there is a Java VM implementation that runs on the Common Language Runtime (CLR) called IKVM. IKVM is interesting because I just tried running Eclipse 3.2RC3 on it and, aside from a few ClassNotFound and IllegalArgument exceptions, things ran fine, albeit a little slow at times. That raises the specter of writing Eclipse plug-ins in C#, but more on that some other time.
So, of course, looking for a break from the mad dash to finish CDT 3.1, I started writing a parser for C#. I've been dying to try out ANTLR v3, the new version of Terence Parr's famous open source parser generator, which is in early access. The biggest plus is that it promises to support LL(*) grammars, i.e. almost any grammar that isn't left recursive or ambiguous. I've spent plenty of time trying to get ANTLR to accept modern, complicated grammars such as C++ and Ada, but gave up after a little while because of all the effort needed to refactor the grammar to meet LL(k) restrictions. (For the curious, LL pretty much means a top-down parser, which is generally how you'd hand write one, like we did with the CDT's C/C++ parsers, and the thing in the parens is the amount of lookahead used to decide which path to take. ANTLR v3 supports infinite lookahead, previously thought of as too expensive, but Terence is proving us all wrong.)
Well, I've just started, and my initial report is "Wow!". Every time I enter a rule that used to give fits to previous versions of ANTLR, as well as to LALR parser generators such as yacc and bison, I get no errors. And looking at the code that gets generated, it looks decently efficient, using a special algorithm to keep the lookahead cheap. Hell, at this rate, all I have to do is type in the grammar as it's given in the C# language spec and I'm done. Well, not really, because the grammar as found there has left recursion and ambiguities, but all of these can be fixed with fairly simple refactoring.
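To see why left recursion is the thing that needs refactoring, here is the kind of hand-written top-down (LL) parser described above, for a toy subtraction grammar I've made up for illustration (nothing from the C# spec or the CDT parsers). The natural rule `expr : expr '-' NUM | NUM` is left recursive: a top-down `expr()` would call `expr()` as its very first step and never terminate. The standard refactoring is `expr : NUM ('-' NUM)*`, which parses with one character of lookahead:

```java
// Hand-written LL parser for the refactored toy grammar
//   expr : NUM ('-' NUM)*
// It evaluates as it parses, keeping subtraction left-associative.
public class ExprParser {
    private final String src;
    private int pos;

    public ExprParser(String src) { this.src = src; }

    // One token of lookahead: peek at the next character to decide
    // whether to loop -- this is the LL(1) decision point.
    public int expr() {
        int left = num();
        while (pos < src.length() && src.charAt(pos) == '-') {
            pos++;             // consume '-'
            left -= num();     // iterate instead of left-recursing
        }
        return left;
    }

    private int num() {
        int start = pos;
        while (pos < src.length() && Character.isDigit(src.charAt(pos))) {
            pos++;
        }
        return Integer.parseInt(src.substring(start, pos));
    }
}
```

Note the iteration in `expr()` also preserves left associativity — `10-3-2` must mean `(10-3)-2` — which is the subtle part that makes mechanical left-recursion elimination fiddly in bigger grammars.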
I can't wait for Terence's beta in the summer, when hopefully he'll have some documentation so I don't have to guess at the syntax from the examples. Also, he is changing the licensing of ANTLR and has rewritten the code so that he owns the copyright, all of which means that ANTLR should be acceptable for inclusion with Eclipse projects (hopefully, fingers crossed). It should then be easier to write parsers for the new languages we want to support with the CDT's code model, DOM, and indexing framework. Kudos to Terence! Now back to CDT 3.1...
Sounds familiar
We've had some good discussions lately in the Planet Eclipse blogosphere about whether Eclipse projects should be focusing on the concerns of users or on building a platform for ISVs to add their value. In the end, I conclude that you need to balance both for the sake of growth in your community. Unfortunately, though, or fortunately as the case may be, Eclipse projects are staffed almost exclusively by Eclipse members who fit more in the ISV camp. These guys need to justify their investment in Eclipse on the bottom line. It's the nature of the business, and there's nothing wrong with that, since without it we wouldn't have the great Eclipse that we have today.
Of course, this isn't just an Eclipse thing. A lot of high quality open source projects are staffed by ISVs and the concerns are the same. Recently, chief Linux maintainer Andrew Morton has been frustrated by the focus of his development community as well. Most of these developers are employed by OEM-types who support Linux running on their platform. But a lot of users who are using "unsupported" platforms are raising bugs that these platforms aren't working anymore. How do you get your developers to focus on something their real bosses don't care about?
Well, this is a big challenge for all open source project leads. Developers contributing to open source aren't under contractual obligation to do anything. What they do work on is generally based on the needs of their employers. Yes, that's a pessimistic view, because everyone that I work with in open source is very concerned about all users of their stuff, not just the users their bosses care about. But when tough decisions need to be made, you can be sure that the general user loses out.
So is that all there is to it? I don't think so. One thing that I think ISVs contributing to open source often don't think of is that you're "open". Everyone can see what you're doing. Everyone can find out that you're contributing to it. And if the general community starts making a fuss, especially in the media, that the open source software they are freely downloading doesn't work for them, that can reflect badly on the open source project. That could lead to negative publicity that your customers get to see, who may in turn start questioning the quality of the product you are trying to sell them.
As I said, the ISVs need to focus on their bottom line when they consider how to invest in open source. But they need to take everything into consideration: not just the direct needs of their product, but also making sure the integrity of the project they are building their house of cards on stays intact.
Wednesday, May 03, 2006
Lego Open Source
Back when I was working at ObjecTime, a real-time object-oriented modeling tool vendor, we tried to convince our boss that we needed to port our code generation tools to support the then-new Lego Mindstorms. He thought it was a great idea but couldn't come up with the business case for it. C'est la vie. That was almost ten years ago, and I had almost forgotten about it.
If you're a regular Slashdot reader, you'll have seen the note about Lego open sourcing the firmware for their new Mindstorms NXT brick. Well, that hooked my attention. Investigating further, I found out that this little box is a pretty powerful unit, with a 32-bit ARM7 processor with 256K Flash and 64K RAM, plus a second 8-bit microcontroller which I assume drives the sensors and motor controls. It has a USB port for connecting the brick to a computer for downloading new firmware and programs. It also has a Bluetooth interface so you can hook up to other devices or, even cooler, have multiple bricks talking to each other.
So do you get the sense this is on my Christmas list? You betcha. Can I justify it? Well, not really. But it would be very cool to have Eclipse support for this target: the CDT to work on the firmware and programs in C, and DSDP components for target management. Maybe I can convince Ian and Mike that they need a cool demo for next year's Eclipse booth at ESC. hmmm.
Tuesday, May 02, 2006
CDT in Action on Big Projects
It's pretty common knowledge now that I use Mozilla, and lately Firefox in particular, as my main test bed for scalability testing. It's a pretty big project, and I have often found issues with the CDT in this environment that we are trying to address as we can.
I was pleasantly surprised the other day when my friend John Camelon (Mr. CDT Parser) brought the following article to my attention. It was written by Robert O'Callahan, who blogged last summer about the promise of using the CDT for Mozilla development. At the bottom of this new article is a list of issues with using the CDT on Mozilla, a lot of which we are still working on and which will feed into our CDT 4.0 requirements for next year.
I was also pleasantly surprised when I went to take a look at the install instructions for ACE & TAO, a pretty big communications framework written in C++ that users have reported problems with in the past. They include instructions on how to use the CDT to develop ACE applications. Very cool.
It's hard to see how widespread the use of the CDT is out there, at least from where I sit. These two examples have certainly opened my eyes a little. Now back to addressing their scalability problems...
Tuesday, April 25, 2006
New Projects 101
It's funny how we are starting to have conversations on Planet Eclipse. But, I'd like to follow up Mike's post, which was following up John Graham's post, with my own. The topic is what to focus on when starting new projects, building a good platform for ISVs, or building good tools for end users. This is something I certainly struggled with in the early days of the CDT.
My take is that what you really need to be doing as a new project is building a good community. Now what does that mean? Having happy users and happy ISVs who like your stuff and want to use it or add it to their product portfolios is certainly an important thing. Having people speak well of your project shows momentum and serves as a magnet for those who don't want to miss out on your "next big thing".
More important to the growth of a new project, however, is attracting developers to help you work on it, and by that I mean "add code". That has certainly been my biggest challenge as the CDT project lead, but it is something that I've had some success with in the last few months, and hope to have a bit more of in the next few months (if all verbal commitments turn into CVS commits :). I don't know what the magic formula is, but to my previous point, we have shown momentum with the CDT, and a high profile project is certainly appealing to developers (not to mention marketing people ;).
But I think more importantly, since most of the developers working on Eclipse work for commercial vendors, you need to make sure your project can easily meet their business needs. You need to make it easy for their employees to get involved and easy for them to leverage their investment in your project. Having a well managed project helps, as does having a good platform for them to add value, as well as good tools to make sure their end customers are happy too.
So, I guess that means you need everything :(. But, my point is really that you need to look at more than just what you should be working on, but also how you should be working. You need to put a business friendly face on your project to help attract vendors. As well, I think we all need to educate vendors about the business of open source product management and help alleviate their fears, which I have seen time and time again. That is something I certainly need to work on more.
Saturday, April 22, 2006
Visual C++ Express Free Forever
Well, as I first found out from Ed Burnette, and as I see now at the Microsoft site, Visual C++ Express is now free forever, just like the CDT. I doubt anything I've said had anything to do with their change in strategy. I had a feeling, when I first saw the one-year free deal, that they were just testing the waters and would eventually remove the restriction. I just didn't think it would happen so soon.
So does that make me give up on my wish to better support Windows development with the CDT? No way! There's more to it than just C/C++ development. I think Eclipse has so much to offer Windows developers that the Express Editions of Visual Studio just don't. As one great example, I can't wait to exploit more of TPTP's static analysis and expand its integration with the CDT's DOM. I think there is so much we can do there to make C++ programming more reliable, and it's something that Visual Studio doesn't offer in any form.
So this announcement just makes me want to improve Windows support so much more that I'm going on a personal mission to make sure it happens. As always, anyone keen on helping me with this mission, please let me know. I have a ton of work to do with my regular QNX and CDT work, so I'd appreciate any help I can get. But I think this is one area that can go a long way to bringing the CDT and Eclipse to our much sought after Uberness.
Tuesday, April 18, 2006
Cross Project Issues/Solutions
One of the things I love most about EclipseCon is that I get to see what the other projects are up to. It was really cool to see the progress that a lot of the projects that were just starting up last year have made since then. One thing I've noticed though is that a lot of them need to solve similar problems. I've been pretty happy to see how these projects are starting to work together to find common solutions.
One example is the Remote System Explorer that IBM has contributed to the DSDP Target Management project. It presents a view of remote systems and provides a framework for attaching services that connect to those systems in various ways. Now that people have heard about it, everyone is taking a look. My friends at HP are looking at it as a solution to remote development for their servers (I suppose that's very similar to how IBM uses it internally) that they'd like to contribute to the CDT. Also, I see that the Parallel Tools Project is now looking at it for remote development for their big supercomputer iron.
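The appeal of that design is worth spelling out: tools plug their own connection services into a common model of a remote host, so each project reuses the framework instead of rolling its own remoting. Here's a rough sketch of the shape of such a framework — purely a hypothetical illustration, not the actual RSE API (the interface and class names here are made up):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an RSE-style design: a remote system object
// that aggregates independently contributed connection services.
interface RemoteService {
    String name();                 // e.g. "files", "shell", "debug"
    String connect(String host);   // returns a status message
}

class RemoteSystem {
    private final String hostname;
    private final List<RemoteService> services = new ArrayList<>();

    RemoteSystem(String hostname) { this.hostname = hostname; }

    // Each tool (CDT, PTP, ...) contributes the service it needs,
    // without the framework knowing anything about the transport.
    void addService(RemoteService s) { services.add(s); }

    List<String> connectAll() {
        List<String> results = new ArrayList<>();
        for (RemoteService s : services) {
            results.add(s.connect(hostname));
        }
        return results;
    }
}

public class RseSketch {
    public static void main(String[] args) {
        RemoteSystem target = new RemoteSystem("build-server");
        target.addService(new RemoteService() {
            public String name() { return "files"; }
            public String connect(String host) { return "files@" + host; }
        });
        target.addService(new RemoteService() {
            public String name() { return "shell"; }
            public String connect(String host) { return "shell@" + host; }
        });
        System.out.println(target.connectAll());
    }
}
```

The point of the sketch is the separation of concerns: the host model lives in one project, while the services can come from anywhere — which is exactly why HP and PTP can both look at reusing it.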
The question that comes to mind, though, is this: if this is something that can be used by many other projects aside from embedded, is it problematic that this functionality resides in the DSDP project? My answer is that, well, the RSE actually resides on dev.eclipse.org. It is being managed by the DSDP/TM project. These are essentially two different things. Anyone can get at the bits and add dependencies on them. What could be problematic is the delivery schedules of the bits and making sure things line up. This is one reason that Callisto is so important, although releasing all at the same time is having its own set of issues.
One thing I'm certain of, though: as Eclipse continues to grow, we are definitely going to run into the very same issues I've seen in my past with very large software projects. Without good architectural control over what we are building, these issues can get out of hand, and that control is what ensures we have good user and ISV experiences. That includes everything from reducing duplication to having common API and UI guidelines. I know a lot of people hate having to comply with guidelines, but I've seen what can happen when they don't, and it ain't pretty.
I think the question Mike is really asking is what role the Eclipse Foundation should have in all this. I'm not totally sure what the answer is, but I believe the best architects are the guys mucking around in the code. They usually have their finger on the pulse of the beast and are in the best position to make the right call at the right time. But what these guys really need is someone to help facilitate architectural decisions, i.e. bring the group together and do some consensus building.
I was lucky enough to attend one of the Eclipse Architectural Council meetings last December (I'm not a member, but Bjorn graciously invited me to attend). There were some great minds in the room, and Bjorn did a good job facilitating the discussions. But I don't remember seeing anything publicized from it, even though we had a great discussion on using TPTP's AGR for UI testing. And I can't remember whether any architectural decisions were made.
I think the processes are pretty much in place. I'm just not sure whether we have all the right people involved. I wouldn't mind seeing what would happen if we brought the top senior committers to the table and asked them what they thought about this or that. I'm sure they already have all the answers. But then, these guys are also really busy getting their features done for Callisto...