I've just been reading a few notes on the web about Palm's new Foleo "Mobile Companion". If you haven't seen it yet, it's a mini-laptop type thing that's intended to work with Palm's Treo smartphone. But under the hood it's really a mini-laptop that has wi-fi and Bluetooth connectivity and a USB port and SD slot for expansion. And it runs Linux with an Opera browser. The price is also pretty reasonable at $600 US, and that's with a 10" screen and a full-size keyboard, although there are no function keys.
So, I'm not promoting the product. Yes, I know people from Palm who are contributing to Eclipse and the CDT, which I'm sure they're using in conjunction with this product. But I think it could be the start of a trend. Everyone loves smartphones and getting their mail on BlackBerrys and such, but the size of the screen and keyboard on these devices really limits their usefulness beyond their mobility. People still need laptops to do their real work.
But I think there's room in the mobility market for devices like this one. The embedded systems-on-a-chip are there now to do it. And I think you'll even see games on these things with the 3D capabilities of these chips. With solid state memory like SD cards getting bigger and cheaper, these could be really useful little machines. Palm was first and it fits their niche, but I wonder if anyone else will take the plunge and make a more generally useful "mini-laptop."
Hey all. This blog records my thoughts of the day about my life on the Eclipse CDT project. I will occasionally give opinions and news regarding the Eclipse CDT - the project and its ecosystem - and on open source in general. Please feel free to comment on anything I say. I appreciate it when people are honest with me. And, please, please, consider all of these opinions mine, not my employer's.
Thursday, May 31, 2007
Wednesday, May 30, 2007
UML Action Semantics, Naturally Parallel
Earlier in my career, I had the honor of reviewing a part of the UML spec, which involved a very surreal phone meeting with one of my heroes in this industry, Jim Rumbaugh. The area was the UML's Action Semantics. At the time it was a separate spec, but it is now intertwined in the Superstructure document as UML's Action behavior.
The idea according to Jim was to provide a sort of assembly language that all software behavior could map to. But I thought it provided a more powerful concept, that of the Action itself. An Action is a unit of behavior that has inputs, does some processing, and produces outputs. The outputs of one action feed into the inputs of other actions. The "Ah-ha" is that all actions that have their inputs satisfied can theoretically run in parallel.
This concept isn't new. Hardware designers have been thinking this way forever. I believe Petri nets present a similar idea in mathematical terms. But what struck me was that this is a really powerful paradigm that can make it easier for programmers to write highly parallel programs. What was needed, though, was a good, 2-dimensional programming language that allowed programmers to create actions and hook up the inputs and outputs quickly and, of course, with minimal typing. But something like that really wasn't an objective for UML.
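Just to make the idea concrete, here's a little toy sketch of that dataflow style in plain C++ (my own illustration using the futures from the newer C++ standard library, nothing from the UML spec itself): each action fires as soon as its inputs are available, so independent actions are free to run in parallel.

#include <future>
#include <iostream>

// Three tiny "actions": inputs in, processing, outputs out.
int produce() { return 21; }
int doubleIt(int x) { return x * 2; }

int main() {
    // Both source actions have their inputs satisfied immediately,
    // so they can run in parallel.
    std::future<int> a = std::async(std::launch::async, produce);
    std::future<int> b = std::async(std::launch::async, produce);
    // This action can't fire until both of its inputs arrive.
    std::future<int> sum = std::async(std::launch::async,
        [&a, &b] { return doubleIt(a.get() + b.get()); });
    std::cout << sum.get() << std::endl;   // prints 84
    return 0;
}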
It's probably one of the reasons I'm keenly watching Eclipse's Modeling project. Aside from being a great framework for creating domain specific languages, it has the capabilities that would be needed to build this "Action" language. And with a good back end that produced code for today's multi-core clusters, I really think this could be a good way to help programmers meet Intel's challenge that "Software has to double the amount of parallelism that it can support every two years" and catch up to what the hardware guys are doing.
Monday, May 28, 2007
How Different Are Linux Distros Anyway?
In the years since Linux took off in popularity enough to warrant commercial software vendors porting their wares to it, one thing I've seen vendors have trouble with is dealing with the massive number of different Linux flavours out there. Back when it was just Windows and Unix (commercial *nixes, if you will), life was so much easier. The operating system vendors ensured that the releases were well defined so that we could easily put together a reasonable list of supported versions for our products.
With Linux, it really is next to impossible to do that. Novell and Red Hat do fill that role as commercial Linux vendors that provide a stamp of approval over their versions of all the packages that go into a Linux distribution. But, really, none of the developers I know that are using Linux are using any of those commercial Linuxes. They're using Fedora, OpenSuSE, and more recently Ubuntu. It really is impossible to validate your products against all the possible combinations of Linux that your customers may want to use.
But, I then ask the question, so what? How different are these distributions, anyway, that it's so hard to support Linux? Yes, you may have version differences in the packages, and things like a change in the major version of GTK can break GUI applications like Eclipse. Also, the number of different ways to set up users' environment variables is pretty confusing, but then applications shouldn't be relying on those anyway. I really wonder if there's much else that can affect most software products.
It bugs me every time someone tries to explain away a bug with "sorry, that version of Linux isn't a reference platform so we can't look at your problem", especially when the person is using a recent distro like Ubuntu. But it really does speak to the challenges that software vendors face with the fragmentation of the Linux market. But I guess it's part of the price we pay for "freedom".
Friday, May 25, 2007
cdt-dev is my office
Today is RC2 day for CDT 4. As we get closer to milestone build dates, I send out friendly reminders to the cdt-dev mailing list on what bugs are still open against that milestone. It's just a prod to get the developers to do something about them so that we have no open bugs on a milestone when we do the build. It's worked every time as we get the friendly but odd 'Zarro bugs found' message from my bugzilla query when we're ready.
RC2 was no different. This morning we had two left. Ken from Austin, Texas gave an update on his, asking for feedback. I, from Ottawa, Canada, gave some feedback telling him to go ahead and fix it. Bala from London, England mentioned he had a patch ready for his, and Mikhail from Russia replied saying he was looking at it. I'm confident we'll be ready in a couple of hours to fire off the build and get the RC out by the end of the day.
This happens regularly on the CDT, and once in a while I stand back and think about what just happened (and I think I've probably blogged about this before too). We have a very effective development team working on the CDT, and the cdt-dev mailing list is the backbone of that collaboration. A lot of groups use different technologies such as instant messaging or IRC channels, but for us the cdt-dev mailing list works great. Bugzilla comes in a close second. But then, we treat Bugzilla entries as mini mailing lists anyway.
I think the biggest benefit of the cdt-dev list is that it's open to anyone. If you want to see what's happening with the CDT at a high level, that's the place to go. If you want more detail, then you really have to watch the bug reports, and signing up to receive notifications on the cdt-*-inbox accounts is the best way to catch the train.
From my experience on the CDT, the most important tool you have to build a community is open communication: mailing lists, forums, IRC. As your community grows, the only way to really talk to them all is via open communication, so it forces you down that path and you end up doing it anyway. But in the early days, it was a hard habit to get into, especially when QNX was working on the CDT by itself, or even when I was at Rational and we started working with the QNX gang, who were only a five-minute drive from the Rational office. But open communication has really paid off in the end for the CDT, and the reach of our cdt-dev mailing list impresses me time and time again.
Tuesday, May 22, 2007
Open Source Ripped?
Bjorn made me read this article by Howard Anderson. Well, he didn't make me, but it is a topic I'm very interested in as well: How does open source make sense in a commercial world?
It's actually a very interesting article. When I finished it, I had to remind myself of the title. In the end, I wasn't sure if he was for or against open source. His general thesis seems to be that open source is a tool used by small companies to gain market share against big companies. Yes, he's right. I've seen that. There are a lot of smaller companies shipping a world-class IDE with their products, making them more attractive. They leverage open source (i.e., Eclipse) to lower costs, since building a world-class IDE is prohibitively expensive for most. I think it's a great business model. I guess he was just looking at it from the big proprietary company side.
There are a couple of areas where I have to disagree with Howard, though. He mentions open source is a "religion". Well, in some circles, I guess open source participants see it that way. Certainly from the outside it looks like Richard Stallman is playing the part of a religious leader, and the FSF is his church.
Howard also seems to believe that the people writing open source are doing so at night when they come home from their real jobs of working on proprietary software. But that's not what I do as an open source developer. Open source is my day job. The company I work for is one of those companies that is reaping the benefits of the open source business model, and is willing to invest in open source to help build a community where we can share the work with each other. And there are lots of developers like me from many companies. Open source is not a religion to us, but a business means to a business end.
So while it'll probably be impossible to shake the stigma of the open source "religion" from what we do, open source in the spirit of "co-opetition" (co-operating competitors) is a vital tool available to the commercial world. Some communities are set up for this to work well, like Eclipse, while others not as much (and I won't name them except over a beer :). But the ones that are seem to be the ones that the big proprietary companies fear the most. Which means we must be on to something...
Thursday, May 17, 2007
Bye-bye 32-bit Windows
I just read that Microsoft was putting an end to 32-bit support in their operating systems. I guess that shouldn't really be a surprise. It is a real struggle for device driver writers to support both, and I think 64-bit Windows has really been hurt by that. The same is probably true for Linux, as we don't see much demand for the 64-bit Linux version of the CDT either: 2.5% of downloads versus 20% for 32-bit Linux.
Maybe this will finally trigger people to focus more on 64-bit and start writing programs with that in mind. The biggest change for C/C++ programmers is that the size of pointers changes. I've seen a lot of code that assumes you can merrily cast a pointer into an int and back and everything is happy. One example of this is Java native code, where we like to stow away pointers in Java fields for later calls back into native-land. Well, with 64-bit, while the size of pointers changes, the size of int does not. There are also different interpretations of sizeof(long): on 64-bit Linux and most other Unix-like platforms it's 64 bits, but on 64-bit Windows it stays at 32. Then there's long long (gcc) and __int64 (MSVC), which in the 32-bit world also mean 64 bits.
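Here's a contrived little example of what I mean (the names are made up); the commented-out line is the 32-bit habit that silently truncates, while intptr_t, or a 64-bit jlong on the Java side, is wide enough to round-trip a pointer on any of these platforms.

#include <cstdint>
#include <cstdio>

// Made-up example: stashing a pointer in an integer field and getting it back.
void stash(void *p) {
    // int cookie = (int)p;    // fine on 32-bit, truncates the pointer on 64-bit
    std::intptr_t cookie = reinterpret_cast<std::intptr_t>(p);  // wide enough everywhere
    void *back = reinterpret_cast<void *>(cookie);              // round-trips safely
    (void)back;

    // On 64-bit Linux (LP64) this prints 8, 4, 8; on 64-bit Windows (LLP64) it's 8, 4, 4.
    std::printf("sizeof(void*)=%u sizeof(int)=%u sizeof(long)=%u\n",
                (unsigned)sizeof(void *), (unsigned)sizeof(int), (unsigned)sizeof(long));
}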
Suffice it to say that the 64-bit world gets a little messy. We'll think back to the simple days of 32-bit with a sigh. But then, I think things will still be better than the now ancient 16-bit world was (now who's old enough to remember that?). Now they say that this will start after 2008, but given the length of time it took to get Vista out, people with 32-bit-only machines shouldn't worry too much. They'll be ready for the dumpster by then anyway. And do we really need another operating system version beyond Vista? Microsoft hopes so, and I'm sure they'll use the 64-bit push as a marketing ploy to help you think so too.
Wednesday, May 16, 2007
Eclipse Wants You!
A few days ago I posted a blog entry worrying about the openness of the Eclipse Platform team. I thought it would generate a flood of comments saying I was wrong. And to a certain extent, I probably am wrong. But I think it's a serious issue that people need to think about.
I guess the point I was really trying to make is that we as the Eclipse community outside of the Platform have depended too much on IBM/OTI's great contributions, to the point where we expect them to fix all of our problems. My experience with open source projects is that it just doesn't work that way.
Open source developers usually work on open source projects for a reason. They are trying to get something done for themselves, and really as a side effect they hope that others will find it useful as well and maybe come help out. Because open source software is free, I think people start to think it's more like a charity, but it isn't. And I think this is an even bigger factor with Eclipse, since the vast majority of developers are employed to work on Eclipse projects. They respond to the community as much as they can, but at the end of the day, if their employer asks them to work on something else, that has to take priority.
So if you have a bug that isn't getting the attention you think it deserves, please think of the people at the other end. There's a good chance it's not because they think your problem isn't important, but that they have probably been assigned work elsewhere and really just don't have the time. Do as much legwork as you can. Create a really good bug report that has a patch and a really good justification that shows you thought about the fix as much as the committer would have. Make it as easy for the committer to fix your problem as you can.
And if you find you really depend on certain functionality that isn't being provided or bugs that you really need fixed, and you do enough great patches, you can become a committer too. The more committers we get from different employers, the better off we'll all be. That kind of redundancy is important in open source and is something we've really learned to appreciate on the CDT project.
Tuesday, May 15, 2007
I love bugs from playstation.sony.com
I have no idea what this group is doing with the CDT. They've been on the outskirts of our community for quite a while, and this bug was just marked VERIFIED to show they are still around. In fact, one of my first experiences as a CDT "dude" was at the first EclipseCon, where someone from the Sony PlayStation group stopped Sebastien, the first CDT project lead, and me, and asked us about CDT extensibility. We thought it was pretty cool then, and it's still cool now.
Forget OO, C++ is a better C
I was working on bug 176353, trying to get interrupt signals sent to Cygwin-based gdbs. Part of the magic for that in the CDT is something called the Spawner. Spawner subclasses Java's Process and adds the ability to send Unix-style signals to the processes it creates. It also implements some fancy I/O, but that isn't supported on Windows. This class is a little Java and a lot of JNI native code that interacts with the operating system to implement the signals as well as the other Process-type things, like starting the process and waiting for it to complete.
So, I guess this code was written originally with Visual Studio 6 many moons ago. However, continuing with my theme of using MinGW for Windows development, I've created makefiles to build the spawner DLL. What made this situation a little weird was that the original creators of the spawner were guys who didn't really know C++, so they did it in C. I always find it weird doing C in VS, but that's what these guys did. So when I wrote the makefile, I used make's default of running gcc on these files. Makes sense.
However, I ran into trouble when I added calls to a couple of Windows routines. I was getting undefined references at link time to the two calls I added. Weird, since I didn't get any compile errors. When that happens it usually means I forgot to add a library. So I added it and still got the errors. What's going on here? Is it something broken in the MinGW port of these libraries? Was my code just getting tired and cranky?
Well, for some reason, maybe because I was getting tired and cranky, I wondered if it was because I was using C instead of C++. Part of my debugging technique is to start by assuming the least likely cause and testing it out. This was really damn unlikely. But I tried it out. I changed the compiler to g++ and ran it over the .c files. I thought it would treat them as C files and nothing would really be different.
But to my surprise, g++ compiled the .c files as C++. And I got a ton of errors. And almost all the errors were for passing the wrong types to functions, especially since I was using UNICODE and not everything was really using wchar_t. Well, no wonder things weren't working. Of course, the one thing it found was compile errors on the two functions I was calling. I had forgotten to add the include for the header that declared them and specified the correct calling convention, WINAPI, which is why I was getting the link errors in C. C was happy to play along without ever seeing the declarations of the functions and made some bad assumptions. I wasted a good couple of hours trying to figure out what was wrong.
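For the curious, here's a contrived reconstruction of the failure mode (the actual calls in the Spawner aren't the point here, so GenerateConsoleCtrlEvent is just a stand-in): leave out the include and gcc, compiling it as C, assumes an implicit "int" function with the cdecl convention and only fails at link time, while g++ refuses to call the undeclared function at all.

// spawner_sketch.cpp - with the include in place this builds fine either way;
// remove it and gcc (as C89) warns about an implicit declaration, assumes
// "int GenerateConsoleCtrlEvent()" with cdecl instead of WINAPI (stdcall),
// and leaves you with an undefined reference at link time, while g++ stops
// with "'GenerateConsoleCtrlEvent' was not declared in this scope".
#include <windows.h>   // declares the call with the WINAPI calling convention

int interruptProcessGroup(DWORD groupId) {
    // With the declaration in scope, the compiler emits a proper stdcall
    // call and the linker finds the decorated symbol in kernel32.
    return GenerateConsoleCtrlEvent(CTRL_C_EVENT, groupId) ? 0 : -1;
}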
So, forget objects, templates, namespaces, operator overloading, and all the other cool features of C++; C++ at its core is just a much better C. It has proper type checking that helps you find those errors before link time, or worse, run time. It all feeds into helping you build better software faster, which is what this whole tools industry is all about. And if you've programmed in C++ for years and have to go back to C, don't forget to make that paradigm shift back to the 80's...
Monday, May 14, 2007
A lesson in release management
One of the first things I remember learning about managing projects came well before I even considered doing so. It came from the lore at the big telecom company I worked at. They had an old telecom switch that was doing quite well sales-wise, but they had started working on their latest and greatest architecture that would pave the way to the future (which it did in the end). However, they were so excited, they started announcing it to their customers well before the release date.
Well, guess what happened. The customers got excited, too. They didn't want to buy the old switches anymore, they wanted the cool new one. Unfortunately, the dates ended up getting delayed and that spelled trouble since sales of the old switches were drying up. Lesson learned, though, and you notice a lot of companies holding back release information for that very reason.
Well, I think the same thing is happening to the CDT. For the first three months of this year, we've been hovering around the 65,000 downloads mark. It's not our biggest. That happened last October and November when we hit 85,000. But it was steady.
Well, I just did the numbers for April and found them at a disappointing 55,000. Maybe it's just a glitch. Maybe people are happy with getting the CDT from other places, like Linux distributions.
But it makes me wonder if this is a side effect of CDT 4. We've been making a lot of noise about it, and we're finding that a lot of people are using the CDT 4 milestone builds, especially starting at M6, which just happened to be at the beginning of April. I haven't been counting the milestone builds in our figures.
We'll see how May's numbers are, but it would be interesting if we're seeing the pre-announcement effect in open source projects too. And I guess, why not?
gdb is a-calling
I've been working over the past week trying to get process interrupts working on Windows so that we can get suspend working again with gdb. The suspend call gets translated to a SIGINT signal raised on the process, which on Windows gets converted into a Ctrl-C event. For some reason it stopped working lately; mind you, I don't remember ever actually trying it before. It was fun, though. It got me back to my roots as a Windows developer and renewed my dislike for Hungarian notation.
Now that I've got it working, though, I'm finding that when you suspend gdb, both the Cygwin version and the MinGW version, you end up in a pretty useless state. The stack trace has gdb totally confused. The addresses don't even look like real addresses. I thought for a moment that it was because of something I did in the CDT. But when I tried it from the command line I got the same result. I did a Google search and looked at the Cygwin and MinGW forums and couldn't find anything useful. Maybe no one cares. Maybe it's my machine. Who knows (do you?).
At any rate, it has led me down a path to take a deeper look at how gdb is implemented and try and debug it myself. This is something that I've wanted to do for a while. We use gdb as our debugger in QNX Momentics so I may be able to help out our gdb guys too. It'll also give me something else to test the CDT with. I'm not sure we've tested Makefile projects enough in CDT 4 so this will give me a chance to do that. My focus will be on MinGW gdb and it would be cool to be able to contribute any fixes I come up with, or even a port to the latest gdb back to these guys.
Friday, May 11, 2007
AMD does big parallel, too.
Just read this at my favorite hardware site, Tom's Hardware (although I'm not so happy with their new look and feel). AMD is showing off their upcoming uber-enthusiast platform, which includes dual Agena FX quad-core processors and their (or ATI's, I'm still finding the new world order there weird) graphics card based on the R600 GPU with its many (320?) stream processors.
The demo was doing real-time face recognition on passers-by. It really shows that what used to be done as batch jobs can now be done in real time with consumer massively parallel systems. We always wonder why we need more horsepower - my web browser is working fine on my three-year-old system at home. But I think we're hitting a tipping point for a number of really cool applications that will change the way Joe Consumer looks at computers.
Which makes me even more excited to get into the raytracing hobby time project I've started to show off the CDT as an IDE for "really cool stuff". Now, to figure out how to get one of these machines :)
Sunday, May 06, 2007
Need some faster floating point?
As I mentioned a couple of days ago, my new hobby is raytracing. I have a curious mind (in more ways than one :) and I always wonder how computers do the things they do. So when my sons got old enough to enjoy video games, we started down that path, and, of course, my curiosity led me into the world of game programming. In order to make games look good, game programmers use a lot of tricks and play with optical illusions a lot. It's pretty complicated, and every time I tried to get into it more, the real world would pull me out and make me deal with it.
In researching my recent blog entry on raytracing, I found a sweet elegance that I always look for in computer architectures. The algorithms are pretty straightforward, albeit pretty compute-intensive, so the barrier to entry into this area seems low enough that I can work on it when I get a chance. It also looks to benefit immensely from parallel processing, another interest of mine, and will get me into that area as well. That, and the demos I saw showed some wicked shadow effects that really added to the realism of scenes, so it'll be cool to show off as a CDT demo as well (as opposed to the spinning polygons I use as an SDL/OpenGL demo right now that you may have seen at ESC).
My first step was to build a vector class that does math with 3D vectors, a critical component of all graphics programming. The sample I was looking at used regular C++ floating point arithmetic with a vector composed of a float for each of the three axes.
class vector {
public:
    vector(float _x, float _y, float _z)
        : x(_x), y(_y), z(_z) { }
    void operator +=(const vector & v) {
        x += v.x; y += v.y; z += v.z;
    }
private:
    float x, y, z;
};
Pretty basic. But this is the first example of an algorithm that can benefit from parallelism. Since I have a fairly new laptop, I wondered if I could leverage SSE (Streaming SIMD Extensions) to implement this. I also wondered how well gcc and the MinGW variant I'm using handle SSE. So I gave it a try.
#include <xmmintrin.h>  // SSE intrinsics: __m128, _mm_load_ps

class vector {
public:
    vector(float _x, float _y, float _z) {
        // SSE loads want 16-byte alignment
        float array[4] __attribute__((aligned(16)))
            = { _x, _y, _z, 1 };
        xyz = _mm_load_ps(array);
    }
    void operator +=(const vector & v) {
        // gcc's vector extensions allow arithmetic directly on __m128;
        // the portable spelling is xyz = _mm_add_ps(xyz, v.xyz);
        xyz += v.xyz;
    }
private:
    __m128 xyz;
};
The constructor is a bit more complicated. And as with most things dealing with SSE, 16-byte alignment is critical for good performance. Looking at the generated assembly, I was pleased to see that gcc, after I made sure to put the -msse2 option on the compile, worked hard at keeping the instances of __m128 aligned like that. The performance tests I ran with addition showed an OK improvement in performance, especially as the number of math operations grew. But when I tried multiplication instead of addition, the performance gains were astronomical. Well worth the extra typing.
Now that I've got that under my belt, I can't wait to actually draw something...
Thursday, May 03, 2007
They're great, but are they open enough?
This is something I normally only talk about after a few beers at EclipseCon. But I think I need to get it off my chest, especially after seeing a recent e-mail on the eclipse-dev list. I'll apologize in advance if I'm getting this wrong and feel free to correct me.
Now first of all, don't get me wrong, I am one of the biggest fans of the gang at IBM's OTI office (still more OTI than IBM they are) and a lot of them are good friends. The quality of the platform and the great extensibility it offers is what has made Eclipse what it is today. We'd all still be in the dark ages if it wasn't for their great work.
And, you know, I think they've come a long way as far as working in the open goes. When we started the CDT, it was really hard to know what they were doing and many times we were surprised by API and functionality changes in milestone builds that required us to scramble to fix up. I think on both sides, CDT and Platform, we've gone that extra mile to make sure we communicate better as committers. We've even received patches from the Platform team to make sure we didn't break when changes did occur.
The thing that has me concerned was highlighted in a mail that just came across on the eclipse-dev list from Kim Moir (sorry Kim, you're just the messenger). "I just talked to McQ regarding the plan. This is what he said... -Component leads can make rules more strict as they see fit..." And the mail goes on a bit longer to talk about the endgame rules for accepting changes.
Now, it is my understanding that it's really up to the individual projects to decide what their development processes are, and I guess this is what the Eclipse project (or was it the Platform sub-project?) has decided their processes to be. And you know, looking at the rules McQ has set out, they really are trying to give more power to the committers, and my first reaction was that this was a positive step in the right direction.
But, personally, in my role as the CDT Project Lead (the CDT is also a sub-project, which makes me a peer to McQ, who is IBM's Mike Wilson, BTW), the last thing I want to do is dictate to my committers what the endgame rules would be. Actually, the last thing I want to do is dictate anything. Maybe it's just the way I am, but I feel the responsibility for the processes and guidelines falls on the committer group as a whole. If they don't agree, I'm actually powerless to stop them anyway, so I'm really just a facilitator. Mind you, to make sure we have rules, I usually suggest something, and if I get no feedback, which happens a lot, we assume everyone agrees. But in the end it should be the committers that decide.
Obviously the Eclipse team is set up differently. A lot of it is historical and due to the organization of the team, both as an Eclipse PMC and as an organization at IBM. But I would really like to see the Eclipse team open up their processes and decision making more and be a bit more transparent.
CDT 4.0 RC0 Now Available
It's been a crazy week or so as we worked feverishly to get our RC0 build together. This build marks our feature completion date, where all new features have been implemented. We did this a bit earlier than the Platform (theirs is the end of this week; we actually locked down at the end of last week) to allow us more time for testing and bug fixing.
This is by far the biggest CDT release we've done since the first one as far as the number of new features goes, which, of course, also introduces a lot of risk of new bugs. We also need to be careful about backwards compatibility so that users can use their old workspaces with the new CDT. The best way to help us out right now is to download the RC0 build, give it a try, and raise bugs on any problems you find. Visit the CDT web page to find out how.
I've signed up to do a webinar mid July to show off the new CDT. It'll probably take that long to learn it all, too...
Tuesday, May 01, 2007
Ubuntu + Dell, Significant? You betcha.
I just picked this up from ZDNet, another of my favorite sites for picking up industry news. Dell has signed a deal with Canonical, the makers of Ubuntu Linux, to provide support for Ubuntu on its consumer PC line. This isn't the first time Dell has tried to enter the Linux fray, but word has it this time will be different.
I think this is actually a huge deal for Linux on the desktop, and there are a couple of reasons why I think this will fly. First of all, Ubuntu is hugely popular in the Linux world, especially with people who are not necessarily zealots but really want to get into Linux with low overhead. And from what I can see, and apparently what Dell has found out, that's a lot of people. I'm probably in that category, and a lot of the articles I've read about Ubuntu are written by people in that category too.
The second big reason why this is different is the support deal with Canonical. Essentially, you can use Ubuntu on a Dell system without worrying about whether the drivers will work with all the hardware you're buying. This has always been my major stumbling block with Linux (other than the crappy fonts :). I had Linux running on a Dell laptop five years ago and struggled to get things like suspend and the graphics display working correctly. This should address that. And the deal will ensure Canonical has a vested interest in making sure it works.
And vested they are. This has to be huge for Canonical. I always wondered how they intended to make money given all the investment they've made in Ubuntu to make it consumer friendly, just to give it away for free. This is it, and I shouldn't be surprised. I've always thought that you can't make money on Linux by selling it in a box. You make money selling services or other products that leverage Linux, just as we do with Eclipse.
This has clearly been Canonical's strategy, and they've started down a road where it'll pay off. Making Ubuntu consumer friendly has actually created a market for Linux PCs. Dell is just responding to the desires of that market. Mind you, it's not for the faint of heart and it really was a huge gamble, but luckily for Canonical, their CEO Mark Shuttleworth came with deep pockets to make it happen. But happening it is.