Friday, August 31, 2007
I ran across this review of Eclipse Europa on eWeek that came out yesterday. It confirms almost every concern and hope I had for the Europa C/C++ package that people have been downloading in droves from eclipse.org. And it was really interesting that the reviewer is a former Windows C++ developer, very familiar with Visual Studio, who is now doing mobile development: exactly the scenario we see a lot of, and the biggest growth area for Eclipse on the C/C++ side.
The first thing that hit me was his summary: "Eclipse Europa is a solid IDE, but it could use more refined packaging for the Windows platform". If you're a regular reader of my blog, what more can I say?
The sore points he ran into really hit home for me. His biggest complaint was the install; he had expected the C/C++ IDE package to include the gcc compiler. There are still way too many steps to getting this package to a point where it's useful for C++ developers. As he puts it: "Nothing leaves a more sour taste in a Windows user's mouth than an application not working properly, or requiring additional manual configurations, after clicking finish on the installation wizard's final panel."
He also had trouble dealing with the Eclipse workspace paradigm. Visual Studio is much more flexible about what files are included/excluded from a project. This is an area we really need to deal with to make these guys comfortable.
He had some good things to say too, though, and he really showed why I think Eclipse will be attractive to Windows developers once we clean things up. The CVS integration is unbeatable. He loved the CDT editor and navigation features including CDT's new Call Hierarchy view. It's these features that really bring the CDT into the mainstream.
One thing to notice, though, is that the title of the article seems to address all of Europa; at least, that's what readers will see first. That's why we really need to be careful when we present Eclipse as an IDE. It isn't an IDE across the board (not to open that debate again). But users see the word IDE and have pretty high expectations, and when it falls short, it reflects badly on everyone.
Thursday, August 30, 2007
The Need for Diversity
I'm in shock. Amongst other emotions that I'm still trying to figure out.
I came into work this morning and checked my e-mail to find that Danny Smith from the MinGW project had sent an e-mail titled "Bye" to the mingw users and developers mailing list. Bye? What do you mean, bye?! Just as I was getting excited about the future of MinGW with its spanking new modern compiler, the only guy working on it has quit. I don't know what to think. Is it a bad joke? Has someone broken into his e-mail account and sent the message? The responses from the other MinGW developers lead me to believe not, as they politely wished him well in his future endeavors while expressing their fear for the future of the project.
And fear we should. I was always concerned about the lack of progress with the MinGW compilers. They seemed stuck on 3.4.2 as the official release for a long time (and now, probably even longer). Danny had come to the rescue and offered hope that the wait was over and we'd soon be able to enjoy all the great improvements made to gcc in recent years. But now it appears someone else will need to take on this challenge. And it is a big challenge: there were a number of bug reports flowing in (one of which was mine), and I was getting worried that Danny would get overwhelmed.
The timing of this is interesting, especially after my blog entry yesterday. I've also been in a number of discussions at Eclipse lately about the need for diversity for projects to succeed. If the contributors to a project all come from one company, what happens to that project when the company needs those resources elsewhere? The CDT was able to survive such an incident because we had contributors from many organizations who stepped up to fill in the holes (and I still can't thank them enough :). But there are projects at Eclipse that haven't worked hard enough to diversify like this, and that is something to worry about if you rely on such projects.
And that's the position I find myself in. I was relying on MinGW's 4.2 compiler to make Wascana a super appealing environment for Windows development, even for commercial use. Now I'm not sure what I'll do. Maybe it's time to focus again on the Windows SDK compiler and debugger integrations, although, unless by some miracle Microsoft lets me redistribute their SDK, that violates Wascana's primary mission as a simple-to-install, complete IDE. I doubt I would have enough time to contribute to MinGW, and I don't really have the expertise anyway. And I have QNX work piling up. And CDT stuff to prepare for. Like I said, I'm still trying to figure this whole thing out...
Wednesday, August 29, 2007
The True Meaning of Wascana
While the progress on Wascana has been slower than I would have liked, it is progressing. And I've been very pleased with the positive feedback I've received on it. Almost everyone I've heard from says it's the right solution at the right time. A complete CDT IDE is hard for people to set up themselves, especially for noobs, and that is Wascana's primary mission in life: to make this easier.
But there is another reason for Wascana, and one I use to justify spending some of my work time on it. I've often seen marketing staff from various vendors promote their Eclipse-based tools as, well, Eclipse-based tools. Now in the Java space, that definitely means something. But in the embedded world, it doesn't have the same punch. It's almost like customers are saying "yeah, so?".
This has been the main driver behind my work on improving the CDT for the "grassroots" segment of our industry. These are the guys just getting into programming, doing it as a hobby, or working in a start-up: people who don't have a lot of money to spend on expensive tooling but who would benefit from a good free IDE. And while there are good free IDEs out there, there is so much more upside to Eclipse.
But I had hit a roadblock in my pursuit of supporting the grassroots. We had reached the point where their biggest hurdle was setting up the CDT with a good compiler, debugger, and set of run-time libraries. This is the stuff Microsoft's Visual C++ has always been good at. And if you look around, thanks mainly to the growth of Linux, there is now a pretty good set of open source tools and libraries to draw on.
And I guess that's why the time is right for Wascana. I think we can build a pretty good free open source IDE from all this, and the feedback I've received is that it will be very popular. If that turns out to be true, then commercial products based on Eclipse will benefit from the extra visibility and the investment will have been worth it. So while I've had to pursue Wascana out on SourceForge due to the licensing and IP requirements on Eclipse projects, I consider Wascana an important part of the CDT, both for the desktop developers who want a good open source IDE based on it, and for the commercial vendors who want their CDT-based IDEs to be successful.
Monday, August 27, 2007
Too clever for me...
Brian Kernighan, of Kernighan and Ritchie (K&R) fame (if you didn't know, Ritchie created the C language and Kernighan helped him write the book on it), has this famous quote. I can't seem to find its root source, but I found it quoted many times:
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
And that perfectly explains my frustration over the last couple of weeks trying to understand two very clever code bases. Not only is debugging twice as hard, but being the new guy trying to understand the code is at least twice as hard. Hmm, maybe here's a new quote for people:
"Learning code is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, the only one who'll be able to understand it."
Unless the person learning the code is twice as smart as you. Or something... At any rate, I'm not twice as smart as most people so I get to struggle trying to figure this stuff out.
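To make the contrast concrete, here's a little example of my own (nothing from Brian, just an illustration): both functions below count the set bits in a word, but only one of them is obvious six months after you wrote it.

```cpp
#include <iostream>

// "Clever": clears the lowest set bit on each pass. A neat trick, but opaque at 2 a.m.
int popcount_clever(unsigned x) {
    int n = 0;
    for (; x; ++n)
        x &= x - 1;
    return n;
}

// "Plain": inspect each bit in turn. Boring, but anyone can step through it in a debugger.
int popcount_plain(unsigned x) {
    int n = 0;
    for (; x; x >>= 1)
        n += x & 1;
    return n;
}

int main() {
    std::cout << popcount_clever(0xF0F0u) << " " << popcount_plain(0xF0F0u) << "\n";   // prints "8 8"
}
```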
BTW, as I was searching around for the source of Brian's quote, I ran across this interesting interview with him that gives a glimpse of the human side of Unix and C as they were created at Bell Labs, many years ago now.
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
And that perfectly explains my frustration in the last couple of weeks trying to understand two very clever code bases. Not only is debugging twice as hard, but being a new guy trying to understand the code is at least twice as hard. Hmm, maybe here's a new quote for people:
"Learning code is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, the only one who'll be able to understand it."
Unless the person learning the code is twice as smart as you. Or something... At any rate, I'm not twice as smart as most people so I get to struggle trying to figure this stuff out.
BTW, as I was searching around for the source of the Brian's quote, I ran across this interesting interview with him that gives a glimpse of the human side of Unix and C as they were created at Bell Labs many years ago now.
Lunar Eclipse filling my Inbox
Hey gang, there's a lunar eclipse Tuesday beginning at 3:51 CDT. I personally can't wait until it's over so that my google alert on "eclipse cdt" can stop reporting this thing...
Friday, August 24, 2007
From Old to New
I was hunting and pecking around looking to see what's happening in the industry, as I probably do too regularly (I really have to get some code done...). At any rate, I ran across some slide shows ZDNet was running on old computers. I still remember the buzz and excitement we young geeks had as computers hit our neighbourhood streets. I don't think I'll ever see anything like it again.
Anyway, one of the pictures was of the first computer I ever typed a program into. It was an HP-85 (click here for the real site that ZDNet borrows the pictures from) that my best friend's dad used at his work for the Fisheries Department at the Government of Manitoba office in town. It was where it all started for me, and it was cool to see the picture. Yeah, the thing had a tiny screen and a proprietary CPU, but it did speak BASIC, and I remember the excitement of trying to figure it out.
Of course, that is in contrast to the latest computer, or at least processor, that caught my eye: Tilera's TILE64, a monster 64-core machine organized as a System-on-Chip (SoC, peripheral interfaces included). It especially sparked my interest because of the market it's trying to address: embedded systems for video and advanced networking. Intel can go on about their server and desktop monster multi-core machines, but there is a real need in the embedded space for this technology too. I can imagine some pretty wicked things embedded devices could do in automation and robotics with this kind of horsepower.
But as with all monster multi-core machines coming out, I still think we need a better way to program them so that we don't get lost in the complexity of getting our programs to do multiple things at exactly the same time. Hell, I spent a good part of the last couple of days solving a deadlock issue in the CDT, and that was just two threads colliding...
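For the curious, here's roughly what that kind of two-thread collision looks like, boiled down to a toy pthreads sketch of my own (not the actual CDT code): two locks taken in opposite orders, and with unlucky timing neither thread can ever make progress.

```cpp
#include <pthread.h>
#include <cstdio>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void* worker1(void*) {
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);   // may wait forever for worker2, which is waiting for lock_a
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return 0;
}

void* worker2(void*) {
    pthread_mutex_lock(&lock_b);   // opposite acquisition order: the classic deadlock recipe
    pthread_mutex_lock(&lock_a);
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return 0;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, 0, worker1, 0);
    pthread_create(&t2, 0, worker2, 0);
    pthread_join(t1, 0);           // with unlucky timing, this never returns
    pthread_join(t2, 0);
    std::puts("no deadlock this time");
}
```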
Monday, August 20, 2007
A lesson in scalability
I just read the Skype blog where a Skypian describes (well, glosses over, but we get the gist) what happened with their two-day outage last week. I don't use Skype very much, but I know a few people who use it for their work and were at least inconvenienced by it. I read the report with somewhat the same motivation that one watches NASCAR: to see the big wreck and find out how it happened. But reading these things helps you think about how you could avoid such wrecks in your day job, so it's useful reading.
The story goes that 30 million computers around the world running Skype all downloaded a Windows Update and did a restart, all at the same time. I always wondered how Microsoft's servers could keep up with that, but I guess they did very well. But when those 30 million Skype users all tried to log into Skype after their restart all at the same time, bad things started to happen and everyone got booted off the system.
Now, being a software professional, it's not clear to me how this outage could last two days. Normally, you get a timeout if the server is busy, and after some amount of time you retry. You'd think there would be some variability in the timeout so that everyone doesn't retry all at once, but maybe that was the flaw they found.
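For what it's worth, the usual defence is a retry loop with exponential backoff and a bit of random jitter, so millions of clients don't hammer the server in lockstep. Here's a rough sketch of my own (the names and numbers are made up for illustration, not how Skype's client actually works):

```cpp
#include <cstdlib>
#include <ctime>
#include <unistd.h>   // sleep(); on Windows you'd use Sleep() from <windows.h>

// Hypothetical stand-in for the real login call.
bool try_login() {
    return (std::rand() % 10) == 0;   // pretend the server is usually too busy
}

bool login_with_backoff(int max_attempts) {
    int delay = 1;                                   // seconds
    for (int attempt = 0; attempt < max_attempts; ++attempt) {
        if (try_login())
            return true;
        int jitter = std::rand() % (delay + 1);      // random spread so clients don't all retry at once
        sleep(delay + jitter);
        if (delay < 64)
            delay *= 2;                              // back off exponentially, capped
    }
    return false;
}

int main() {
    std::srand(static_cast<unsigned>(std::time(0)));
    return login_with_backoff(10) ? 0 : 1;
}
```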
But the lesson of the day is to always consider the "impossible", since sooner or later it may not be impossible. We run into that with the CDT. We find users who take the CDT, import any old project they may have, and expect the CDT's parsers to find everything. In a lot of cases we're fine, but we definitely don't take all possible scenarios into consideration. And I think that will be the next phase of the CDT's lifecycle: reaching the maturity where our feature set works on more and more projects, and more and more happy users are added to our community. Opening our minds to the impossible will help us get there.
Just when you needed a Boost
I've been aware of the Boost C++ library for quite a while, but in the context I had to deal with it, it was painful. The Boost library is a collection of C++ templates intended as a trial ground for additions to the Standard Template Library that is part of the C++ standard. Some of them, according to Bjarne, have made it into the next standard, C++0x. But they stretch C++ templates to their limits and, as such, stretched the CDT's C++ parser to its limits and broke it. In the early days of the CDT, we eventually just skipped it.
But lately, Markus on the CDT team has been testing his indexer work with Boost, and I've had a number of requests from people to include it in Wascana. So I decided to take a fresh new look at it.
Now, I was expecting some simple container templates and utilities and such. And there were much-needed things, like a threads package for multi-threading your app and a regular expression utility class.
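As a taste of the regular expression class, here's a minimal Boost.Regex sketch (the pattern and input are just something I made up):

```cpp
#include <boost/regex.hpp>
#include <iostream>
#include <string>

int main() {
    // Pull a date like "2007-08-20" out of a line of text.
    boost::regex date_re("(\\d{4})-(\\d{2})-(\\d{2})");
    std::string line = "Build failed on 2007-08-20, see the log for details";
    boost::smatch what;
    if (boost::regex_search(line, what, date_re))
        std::cout << "year " << what[1] << ", month " << what[2] << ", day " << what[3] << "\n";
    return 0;
}
```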
But I was amazed at some of the big constructs they have in there. The first thing I ran into was a complete lexer/parser subsystem, including a preprocessor. With that, it wouldn't take too long to build parsers in C++, and maybe even a C++ parser.
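To give you an idea of what that looks like, here's a small sketch in the style of the classic Boost.Spirit examples, parsing a comma-separated list of integers (treat the details as approximate; the headers and names have shifted between Boost releases):

```cpp
#include <boost/spirit/core.hpp>   // classic Spirit; newer releases move this around
#include <iostream>
#include <string>

using namespace boost::spirit;

int main() {
    std::string input = "1, 2, 3, 4";
    // Grammar: an integer followed by any number of ", integer" pairs,
    // with whitespace skipped between tokens.
    parse_info<> info = parse(input.c_str(), int_p >> *(',' >> int_p), space_p);
    std::cout << (info.full ? "parsed the whole input" : "parse failed") << "\n";
    return 0;
}
```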
As well, there is a Statechart engine. This is something I've dealt with a lot in my past, and it was cool to see a solution that uses templates, states as objects, and some neat tricks to implement action code. Whether it scales to real-sized state machines, I'd have to dig deeper to see.
I've always been amazed at how powerful C++ templates can be, and at how compilers can take all this template code, specializations, and such, and optimize it down to code about as efficient as what you would have written by hand. But with templates you work at a higher level of abstraction, which means higher productivity. Boost gives you some pretty powerful abstractions. We'll see how easy they are to use in practice.
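The classic illustration of that abstraction-without-overhead argument is a compile-time factorial (a toy of my own, nothing Boost-specific): the recursion is resolved entirely by the compiler, so using the template costs no more at run time than writing the literal by hand.

```cpp
#include <iostream>

// General case: N! = N * (N-1)!
template <unsigned N>
struct factorial {
    static const unsigned long value = N * factorial<N - 1>::value;
};

// Specialization ends the recursion at 0! = 1.
template <>
struct factorial<0> {
    static const unsigned long value = 1;
};

int main() {
    std::cout << factorial<10>::value << "\n";   // the compiler folds this down to 3628800
}
```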
Thursday, August 16, 2007
Debugging the Debugger
I've been trying out MinGW's new 4.2.1 gcc compilers. As I mentioned previously, they're experimental. But I've gotten really good feedback from people that moving to 4.2.1 is a great move and will help make MinGW a serious choice for developers.
They actually have two variants of gcc that they're working on. One of them supports exception handling based on the debug information described by the DWARF standard. It's apparently much more efficient than the default approach based on setjmp/longjmp. I'm not sure what all of that means, but my take is that the DWARF version is better.
At any rate, I had a problem using the DWARF version that I didn't have using the default (sjlj) version. If I specify the path to a file using Windows' traditional backslashes, e.g. ..\main.cpp, gdb gets confused and I can't set a breakpoint on a line. And, unfortunately, CDT's builder builds files this way, so my breakpoints failed to get set.
So I downloaded the source to MinGW's gdb, configured and built it, and set up a debug session, all within the CDT (this worked since configure generates forward slashes). I was able to set breakpoints, look at the DWARF symbol data that gdb was trying to use, and find where the line number info was missing. With that information, I was able to file a hopefully helpful bug report that the MinGW developers can take, or, if I find the time, I can try out different solutions. The only trouble I had was keeping straight which gdb was which :).
At any rate, this brought home again why I love using IDEs for development (which gave me a great intro for an article I'm writing). The productivity of using a debug environment that provides point and click visualization of debug information has to be at least ten-fold over using command line debuggers, and maybe a hundred-fold over using printfs. Once you start using it, you'll never go back.
Tuesday, August 14, 2007
The Master Speaks, Bjarne's vision of the future
I'm not sure whether you'd call him a Jedi master or a Dark Lord. I guess that depends on your opinion of C++. Personally, I've always affectionately called him Barney (which I'm sure he'd hate), and, of course, I treasure my copy of "The C++ Programming Language", the "Barney Book".
Bjarne Stroustrup, the inventor of C++, recently gave a rare public talk at the University of Waterloo, Canada's top university for computer science. The topic of the talk was the new version of C++, currently called C++0x (he mentions that if it slides into 2010, they may just call it C++0xa, and yes, he has a pretty good sense of humour). But he also talked a lot about the past and present of C++. You can download the talk here, but be warned: it's huge, and you may want to use the BitTorrent.
I'm a long-term fan of C++, having used it for my grad studies work back in 1989. It was a no-brainer to me that it became so popular: Bjarne was able to bring object-oriented constructs and generics to C programmers without compromising on performance. And C++0x has performance squarely in its sights as it works to clean up some of the complexities of the language and bring in new concepts that desperately need standardization, like threads.
He also had some great examples of why performance is critical, even in today's world of fast computers with lots of memory. Embedded systems have always had performance as a high priority, and in the mobile world, high performance also means using less power, which makes power consumption a performance issue. Also, if your application uses less memory and runs faster, that leaves more resources available for more functionality, making the system even more useful.
So while the world still seems to be jumping on the Java/C# or Ruby/PHP/Python bandwagons, C++ still has, and always will have, its place. 3 million C++ programmers can't be all that wrong...
Monday, August 13, 2007
Can we help these guys be successful?
I received a Google Alert pointing at a post from a developer who is making the transition from Windows to Linux. Obviously, to do so, he has to drop Visual Studio as his IDE of choice, and he has picked Eclipse and the CDT for his Linux work. Despite some of the difficulties in setting up the CDT, he picked it as "rivaling Linux's best IDEs".
This transition was one of the scenarios we thought about in the early days of the CDT that would make the CDT successful for desktop development. But it is really only recently that I am starting to see this become more and more common. I think Ubuntu was desktop Linux's tipping point. And as Linux becomes more popular there, as the momentum in the press seems to be showing, I think there will be more and more developers looking at Linux when building their applications. And the CDT, technically, is positioned well here.
But the one thing that disappointed me about the gentleman's post was the frustration he had setting up the CDT to work properly for him. And, you know, this frustrates me as much as it does these people. We've made long strides at improving the CDT, especially in this area, but it appears the message isn't getting out, and maybe we missed something.
And to be honest, the people who get the CDT from Eclipse directly and try to make it work like this aren't getting the support they need to be truly successful with it. If you get your CDT from a commercial vendor, you have a much better chance, since vendors generally make sure the environments work properly for their customers (we do our best at QNX to do so for ours). But the committers working on the CDT are kept very busy by their employers, as they should be, and that makes it very difficult to mount a focused effort to make the CDT truly work well out of the "open source" box.
And this comes back to the Platform versus IDE debate. The CDT at Eclipse.org is a platform; it's not an IDE. It's missing several components that make a true IDE, and support for people trying to use it as an IDE is probably the biggest one. I'm trying to do something about it in open source with the Wascana project, but sometimes I get the feeling that I'm fighting a losing battle. And without success on the desktop, I don't think the CDT can truly be Uber.
Or maybe I'm just feeling down since my holidays are over and I have a ton of work piled up...
Thursday, August 09, 2007
MinGW gcc 4.2.1 now available
Danny Smith from the MinGW project has posted his work on porting gcc 4 to the MinGW platform. From the mingw-user mailing list, it looks like a lot of people were waiting patiently for it. One thing I've learned in my days working in open source: it really makes you mad when someone complains that a release or some feature isn't ready yet. But I know I'm glad it's finally here, even if it is currently experimental (with the target of 4.3 being the official version).
So what's the big deal with gcc 4 versus gcc 3? The biggest thing is a new optimization framework that promises to generate faster code, even across functions. This is an area where commercial compilers excel. And gcc 4 brings MinGW into the mainstream, being released only days after the official 4.2.1.
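Here's a toy illustration of my own (not from the MinGW announcement) of the kind of optimization this is about: at -O2, gcc can inline the helper and fold the whole call chain down to a constant, so the abstraction costs nothing at run time.

```cpp
#include <iostream>

static int square(int x) {
    return x * x;
}

int area_of_ten() {
    return square(10);   // with inlining and constant folding this typically becomes "return 100;"
}

int main() {
    std::cout << area_of_ten() << "\n";
}
```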
The other interesting thing is that the port also supports OpenMP, a standard API and set of language extensions for parallel programming. I know the PTP people will be very interested in that. Also, there is support for Objective-C++, which we've had numerous requests to support in the CDT.
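For those who haven't seen OpenMP, it mostly boils down to compiler pragmas like the one below; a minimal sketch, built with gcc's -fopenmp flag:

```cpp
#include <cstdio>

int main() {
    const int N = 1000000;
    double sum = 0.0;

    // The loop iterations are divided among threads; each thread keeps a
    // private partial sum and the results are combined at the end.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; ++i)
        sum += i * 0.5;

    std::printf("sum = %f\n", sum);
    return 0;
}
```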
I've started updating the Wascana installer to include this new compiler and a couple of other updates it requires. Tune in to the web site to find out when 0.9.3 will arrive (with all its fancy new branding too :)
Wednesday, August 08, 2007
Man, I could use a paint shop plug-in
So I'm stealing away a couple of hours here and there during my second set of holidays to work on Wascana. One of the things I've done is create a logo. You can see it on the home page. It's pretty simple: an orange moon to represent a lunar eclipse, and a blue 'W' for Wascana. I made it using Paint.NET, a photo editor written in C# (for some reason...). But it was pretty handy, since it supports layers and let me draw simple shapes like the circle and text.
The next phase of the exercise was to use the logo to create images for the icons and bitmaps in Eclipse. I wasn't too happy with how Paint.NET scaled the images, and I wanted more control over the properties of the circle and text. I tried a number of different programs, an old version of Paint Shop Pro (from when it was still owned by Jasc) and GIMP for Windows. Neither did what I wanted.
But that got me thinking. Why isn't there a plug-in for Eclipse that does this? I am making bitmaps for my .product file, and Eclipse is for everything and nothing in particular, so why not image editing? (And this plays into one of my dreams for Eclipse - why not take it to the next level and do 3D model editing too, both needed for game development - but I digress.) Does anyone know of any activity in that area?
Wednesday, August 01, 2007
Eclipse CDT, More than Just for Writing Code
I think I've gotten more Google Alerts about "Eclipse CDT" in the last couple of weeks than I ever have. It's a great sign that people want to talk about their experiences, good or bad.
The latest one I got was from something called the "Reality Factory 2 Development Weblog". The author had struggled with the performance of CDT's content assist in Callisto, and I understand why. In Callisto it did a full parse, including all the header files included by the file in the editor, which for C++ with lots of templates and such took quite a while. In Europa, one of the features my intern worked on over the winter was migrating this to use the index for the contents of the header files. It's way faster, and the author was very happy to see it.
It's also interesting to see the comparisons with Visual Studio. The author states that in many cases Eclipse and the CDT are actually faster than VS, and he's switched back to Eclipse. That's cool to hear. It's been a while since I used Visual Studio regularly, but it looks like we're reaching one of the bars we set for ourselves: being as good as VS (being as good as JDT is the other one, BTW). This makes me believe even more that Wascana is important as a path for VS users who want to switch to Eclipse with all the components they'd expect of an IDE. This is definitely one of the main growth areas I see for the CDT in the coming years.
And the author of the blog entry really brought home why that's true. It's not just Eclipse and the CDT's support for writing code. It's all the other plug-ins available to developers to help them in all areas of software development, including managing bugs with Mylyn and source code repositories with Subversive. Even the built-in web browser gets kudos for helping out. Looking at the big picture of the Eclipse ecosystem, you really do get a sense of the value proposition of Eclipse. Embedded and enterprise developers get it; we just need a vehicle to get that to the desktop developer too.