As anyone who has followed my past work knows, software architecture is a particular interest of mine. I find the subject fascinating, but my interest is not entirely selfish.
Understanding architecture, and the trade-offs that different architectures imply, is an important part of any software project. Whether you're discussing a Content Management Platform like Drupal, a language like PHP, or a particular web site, having a solid understanding of the "big picture" is crucial not only for building the system right in the first place but for communicating that architecture to others.
To be able to speak and think about the design of your system properly, though, you need to understand the trade-offs that come with it. There is no such thing as a free lunch, and designing a system to be powerful in one way invariably harms it in another. It is important to know what your priorities are before you start building; and in a distributed collaborative environment like Drupal, to agree, at least to a large extent, on what those priorities are.
Let us therefore examine those priorities and the trade-offs they require.
Architectural patterns
Software architecture is the process of structuring a large logical system. There are many ways to do so, and a number of common patterns and approaches for doing so. Architectural patterns are a sort of generalized case of design patterns; classics such as Model-View-Controller and Presentation-Abstraction-Control come to mind, as well as others less frequently seen on the web, such as Pipes and Filters (the entire basis of the Unix command line) or the not-at-all-pornographic Naked Objects.
Different architectural patterns are not inherently good or bad. In fact, thinking of them as such is self-defeating. Different patterns are more or less appropriate given the nature of the system and its priorities. Without an explicit understanding of those priorities it is impossible to speak intelligently about what architectural pattern is appropriate.
The classic example here is not from software at all, but from cars. (Really, what discussion of computers is complete without a car analogy?) The 1960 Chevrolet Corvair featured a swing-axle suspension system. That type of suspension is more commonly found on sports cars, as it changes the handling of the car in a way that makes sense for sports car drivers but that most sedan drivers aren't used to: it results in less of the tire staying on the road during turns. That's fine if you're going for sporty handling, but not very safe in typical city driving, especially if you're not used to it. The result was a much less safe car, not because the suspension was bad but because it was inappropriate for a sedan. (It also launched the career of Ralph Nader. True story.)
In some cases, even the "null architecture" is appropriate; this is the "I don't think about architecture, I just add code until it works" approach, sometimes called "monolithic architecture". That is still an architectural pattern, and there are cases where it is the right one (usually for very small or short-lived projects).
Axes of Architecture
When considering the appropriateness of an architectural decision, there are a number of common factors to consider:
- Modifiability
- How easy is it to change the way the program works later?
- Extensibility
- How easy is it to tack on additional functionality, or take away existing functionality?
- Testability
- Is the code structured in a way that makes it easy to separate out parts and unit test them?
- Verifiability
- Can we mathematically prove, not just believe, that the code is correct in all cases?
- Performance
- Does the program run quickly? How fast does it get through the task at hand?
- Scalability
- How well does the system scale to lots of traffic? (Hint: Scalability is not the same as performance, although improved performance usually improves scalability.)
- Usability
- How easy is it for the end-user to use and leverage the resulting system?
- Understandability
- How easy is it for developers to understand and leverage the system? This is especially important for APIs. (Barry Jaspan referred to this as DX at one point.)
- Maintainability
- All software requires updating and bug-fixing. How easy is it to do that?
- Expediency
- How long does it take to actually, you know, write the damned thing?
What's more, these different axes are frequently at odds with each other. Extensibility and Modifiability, for instance, usually go hand in hand but make Verifiability and Testability extremely hard. Performance usually (but not always) helps Scalability, but Scalability can sometimes harm performance through over-abstraction. Maintainability and Expediency are often an either-or question, as writing cleanly extensible and maintainable systems is hard. The Perl language is frequently described as a "write-only language", because syntactically it strongly favors Expediency and Performance over Maintainability or Understandability.
Drupal architecture
Drupal has, implicitly, favored certain factors over others. That's not a bad thing, but it is important to understand, and agree on, what our priorities are and why we have them.
For instance, Drupal has always emphasized Extensibility. In fact, I'd argue that has traditionally been our most important architectural priority (except when it hasn't been), thanks to the hooks system. However, that extensibility has come at the cost of Testability and Verifiability. Even with the major push for automated testing in Drupal 7, which has been incredibly beneficial, Drupal is architecturally difficult, if not impossible, to properly unit test. Unit testing requires completely isolating a piece of code so that it can be analyzed in a vacuum. Hooks, by design, make isolating a piece of code nearly impossible. Code anywhere in the system could affect almost anything, and you can't control what a user decides to install.
Is that a good trade-off? If you're building a site-specific module, yes. If you're trying to debug an issue, no.
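To make that concrete, here is a minimal, hypothetical sketch of why a hook-invoking function resists unit testing. module_invoke_all() and hook_node_view() are real Drupal 7 APIs; the function and its body are invented for illustration and are far simpler than actual core code.

```php
<?php
// A deliberately simplified function in the style of Drupal 7's node
// rendering pipeline.
function example_render_node($node) {
  $node->content['body'] = array('#markup' => check_plain($node->body));
  // Every enabled module's hook_node_view() implementation runs here,
  // and each may add to or alter $node->content in arbitrary ways. To
  // unit test this function in isolation you would have to control the
  // full set of installed modules, which is exactly the problem.
  module_invoke_all('node_view', $node, 'full', NULL);
  return $node->content;
}
```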
Drupal's hook system and standardization on Giant Undocumented Arrays of Doom(tm) are great for Modifiability and Extensibility, because you can do pretty much anything anywhere just by adding an alter hook. However, if you're not already used to this completely home-grown design pattern, it is incomprehensible and terrible for Understandability. (Even if you are used to it, it's still terrible for Understandability and Maintainability.) And if you come from a background that uses more conventional techniques, or from an academic background, you're likely to run screaming; in fact, many people do.
Is that a good trade-off? If your target developer is site-specific casual developers, yes. If your target developer is someone who already has extensive experience developing for any other system (PHP or otherwise) or has an academic background in CS, no.
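For readers who have never met the pattern, here is a minimal sketch of an alter hook. hook_form_alter() and the site-wide contact form are real Drupal 7; the module name and the specific tweaks are invented.

```php
<?php
// Implements hook_form_alter(). Any enabled module can reach into any
// form's giant nested array and change nearly anything.
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'contact_site_form') {
    // Which keys exist, and what they mean, is mostly discovered by
    // dumping the array rather than by reading documentation.
    $form['subject']['#description'] = t('Keep it short.');
    // Bolt an extra submit handler onto someone else's form.
    $form['#submit'][] = 'mymodule_extra_submit';
  }
}
```

That is enormously powerful and enormously opaque at the same time.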
For Drupal 7, there was an explicit decision by many developers to emphasize Scalability. That's not at all a bad decision, but in some cases it came at the cost of Performance. The best example here is Field API storage. It is now pluggable, which is great and allows non-SQL back ends to be dropped in. However, the extra abstraction that required makes the code harder to follow and makes combined storage harder. Similarly, the Expediency of getting a working SQL storage driver in place necessitated throwing out the dynamic table schema used by CCK in Drupal 6, which means a huge increase in JOINs and therefore a reduction in Performance.
Was that a good trade-off? If you're Examiner.com or Sony BMG or Acquia's Drupal Gardens, yes. If you're a small non-profit, church, or personal site on shared hosting, no.
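To make the JOIN cost concrete, here is a rough sketch of what per-field storage means for a query. The field_data_* table naming and the db_select() API are real Drupal 7; the field names other than body are invented.

```php
<?php
// In Drupal 7 each field lives in its own table (field_data_FIELDNAME),
// so assembling several fields in SQL means one JOIN per field. In
// Drupal 6, CCK could often put all single-value fields of a content
// type into one shared table, needing a single JOIN.
$query = db_select('node', 'n');
$query->join('field_data_body', 'b', 'b.entity_id = n.nid');
$query->join('field_data_field_subtitle', 's', 's.entity_id = n.nid');
$query->join('field_data_field_teaser_image', 'ti', 'ti.entity_id = n.nid');
// ...and so on: the JOIN count grows linearly with the field count.
$result = $query->fields('n', array('nid', 'title'))->execute();
```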
From some perspectives, Drupal 7 will be a huge, massive leap forward. From others, it's a huge, massive leap backward. That depends on what your needs and priorities are.
That's not to say that Drupal 7 is bad, or that the people building it (myself included) did something wrong.
Looking forward
Well, actually, it does. Do we know what our priorities are? If forced to decide whether it's worth sacrificing some extensibility for verifiability, or vice versa, what would we decide?
Which is more important to make fast: The 95% of the market that runs on cheap shared hosting and has no PHP developers available to it, or the 5% of the market that runs its own server cluster and is more than happy to install MongoDB and Varnish and has four full time PHP developers, and therefore pays the salary of the people working on Drupal in the first place?
If there were a way to make Drupal faster for both of those groups, but at the expense of Modifiability and Extensibility, should we do it?
If there were a way to make Drupal easier for new developers to understand but at the expense of performance, should we?
If we can make Drupal easier to use for new site builders but at the expense of making it harder to develop for, should we?
These are the important questions that we need to be collectively asking ourselves. At the same time, we need to stop lying to ourselves and thinking that we can have our cake and eat it too.
What trade-offs are we willing to make?
you forgot contrib
While you lament the loss of the dynamic table mechanism of CCK (never mind that it came with conversion code that was horribly bloated and horribly unsafe too), you forgot that it has been added as a contrib module called Per Bundle Storage.
This whine is old and rather frustrating. We had this debate almost two years ago and PBS got written then: http://groups.drupal.org/node/18302
Paying no attention to what others do (or want to do), labeling the strength of Drupal as Doom(TM) while being oh so sure of theory really makes me cringe with fear for the future. You will build and build and build until you have a tower that very few people can scale.
Not Expediency
You have conveniently, just to prove your point, forgotten a few things which are nicely documented on groups.drupal.org about why the current storage was chosen.
The only time it matters is if you want to pick certain fields out of an entity, as said on http://groups.drupal.org/node/9297 . This is unsupported anyway because it would not go through the proper load hooks. Once it goes through the load hooks, you can use entitycache.
Where is your performance loss now?
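(A minimal sketch of the load path being described; entity_load(), field_get_items(), and the entitycache contrib module are real Drupal 7, the node IDs are made up:)

```php
<?php
// Load entities through the proper load path. hook_entity_load() and
// friends fire here, which is what lets a module such as entitycache
// serve fully assembled entities from cache instead of touching the
// per-field tables at all.
$nodes = entity_load('node', array(1, 2, 3));
foreach ($nodes as $node) {
  // Field values are already attached to the loaded entity; no direct
  // field-table query, and no JOINs, required.
  $body = field_get_items('node', $node, 'body');
}
```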
(Let's not mention that a certain someone was rather agitated about making sure remote fields are supported, which of course immediately nukes this old-style "let's JOIN field tables together" idea.)
And finally: we did not deserve this
When these decisions were made, Examiner.com did not even exist (not with this concept, anyways), much less use Drupal, so I squarely refuse to paint Examiner.com as an enemy of small Drupal websites. Nor did NowPublic.com have any intention of using NoSQL back then (it still does not use it; it does not need it). Rather, NowPublic / Examiner has been a staunch ally of Drupal since the very beginning, sponsoring since the Antwerp developer sprint back in 2005.
Also, I can't imagine what made you paint Acquia as being against small websites when Gardens is just that: a real lot of small websites where, of course, the difference between sites makes it practically impossible to do site-specific optimizations.
"Enemy" is not the point
chx, please do me the favor of reading what I write. I did not call Examiner.com or Acquia "The enemy" of anything, nor did I say either company was "against" small web sites. I said that architectural trade-offs that benefit high-end sites (of which Examiner.com, Acquia, and Sony BMG are high-profile examples) can easily hurt performance of smaller sites. The reverse is also true: Architectural decisions that would make Drupal (or any other system) really fast on $5/month shared hosting would likely make it far less scalable and therefore a poor choice for million-hit-per-week sites.
I like the flexibility of per-field storage. I was and am fully in favor of it. But it has trade-offs and consequences, and to pretend that our development choices don't have consequences is simply foolishness.
By simplifying this post down to "enemies" you are demonstrating precisely the lack of perspective and understanding I'm talking about. These architectural decisions are not intrinsically good or bad; they are not "enemies". They all have different pros and cons that may or may not be worth the trade-off in different cases.
And if we want Drupal to be able to play in both the "weekend non-profit button clicker" space and the "top 50 sites on the web" space (and I do), then we need to understand and appreciate the different trade-offs those markets require and try to balance them as best we can.
There are no enemies here.
Hm, I read this
So you think this does not read as "we sacrificed 95% of the market for the good of 5%"?
Full quote
The full quote includes "the 5% of the market... has four full time PHP developers, and therefore pays the salary of the people working on Drupal in the first place". That's also the 5% that gets us noticed, gets more people coming into Drupal because it shows we're cool, and is frankly more fun to work on.
Did we "sacrifice 95% of the market for the good of 5%"? And if we did, was that a bad decision?
I am not answering that question here. I am asking it, and saying that we need to consider that question carefully. I am not declaring anyone or anything a villain.
On site size, there needs to
On site size, there needs to be a further division of priorities, since Examiner, Sony BMG and Drupal Gardens are very different Drupal installations.
Drupal Gardens is a big Drupal installation, but I doubt any of the individual sites are above average in terms of the amount of content in any one Drupal site, or traffic for that matter. If you don't have a lot of content in any one database, whether you use per-field SQL storage or MongoDB won't make a great deal of difference (especially if you're also serving most of your content via the page cache). Even the worst query with temp tables and filesorts will complete reasonably quickly if there are < 1000 items in the node table. While writing this I also checked with pwolanin whether Drupal Gardens uses memcache for caching, and they don't. So if you have a very simple site running on shared hosting, I don't think the field storage is going to make any difference, nor do pluggable caching backends, and you also won't notice if things are a few ms slower.
It's simply not the case that pluggable systems like caching and field storage only benefit the biggest Drupal installations - what they benefit is sites with lots of content and/or a high proportion of authenticated page requests. That could be Examiner.com, or it could be the very first Drupal site I started back in 2005 - which has user-contributed content and a forum, runs on a budget of about $100/month (now, was more like $10/month in 2005), and which prompted most of my work on performance and scaling long before I started doing paid Drupal work at all (to avoid that limited budget getting any bigger - precisely because it's a not-for-profit site run by someone who didn't even hack on the weekends much when it was first built).
On the field storage decisions, I think it's important when discussing this to separate the pluggable field storage from the per-field default SQL storage; these were two largely separate issues. Having pluggable field storage did not result in the per-field storage; in fact, the field storage hooks were added precisely so that it'd be easy to implement alternatives after the fact, IIRC.
In terms of Mongo specifically: back when Drupal 5.0 was released, pretty much no sites were using memcache and you had to apply core patches to make it work (and still do for 5.x, I think, or use Pressflow). In about 2008 I set up memcache on my first ever VPS with Drupal 6 following a howto, and that allowed me to avoid upgrading to a more expensive VPS by taking some load off the database. Assuming MongoDB gets adopted by more high-traffic/lots-o'-content Drupal 7 sites, which I hope it will, then when that eventually results in an EntityFieldQuery backend for Views and other niceties, I imagine we'll start to see people running MongoDB on VPS/single-server setups to avoid the much more expensive business of hand-tuning SQL queries, pre-generation, denormalization, and all the other work that D6 sites currently have to worry about with CCK.
Deployability
That one is the major stumbling block I see with Drupal. It's simply impossible to reliably and efficiently deploy code and, more importantly, content between development, staging, and production systems.
Aside from this, the non-OO approach was, IMHO, a nice solution back when OO know-how was scarce in the PHP community, but it has outlived itself, to the point where it's a much greater WTF than OO is, even to beginners.
Next post
For code, the Features module and similar export-to-code approaches have, I think, helped enormously. It's still not perfect, but there have been considerable improvements in that regard. For the Butler project, we're looking at building everything in code first, with the user configuration as an architectural afterthought rather than the more typical other way around. That's specifically for the code deployment question. For content, yeah, that's a hard one. :-) Greg Dunlap has indicated he wants to step up and focus on that in Drupal 8, which I think is fantastic.
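To illustrate the export-to-code idea for readers who haven't used it: Features generates code in roughly this spirit for many component types. hook_image_default_styles() is a real Drupal 7 hook; the module and style names here are invented.

```php
<?php
// Declare an image style in code rather than clicking it together in
// the UI. Because it lives in a module file, it can be versioned,
// reviewed, and deployed like any other code.
function mymodule_image_default_styles() {
  $styles['teaser_thumbnail'] = array(
    'effects' => array(
      array(
        'name' => 'image_scale',
        'data' => array('width' => 120, 'height' => 120, 'upscale' => 1),
        'weight' => 0,
      ),
    ),
  );
  return $styles;
}
```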
I partially agree with you about OO vs. non-OO, but not entirely. My next post is applying these same principles to programming language paradigms, of which OO and procedural are but two examples. (PHP is nice in that you do get a choice of several different paradigms.) Stay tuned. :-)
It is possible to deploy drupal reliably and efficiently
It is possible to deploy Drupal reliably and efficiently — I do it all the time. Granted, it's not easy. IMO it is key to have an intimate understanding of Drupal: what is in code, what is in the database, what can be moved from database to code and how, as well as development tools to push DBs from prod to staging environments quickly and easily to test the deployment and upgrade path.
Kudos for opening a very refreshing discussion
In the life cycle of all open source projects, architectural decisions are made, and the best thing is to make them as conscious as possible, so that members of the community are as aware as possible that they are being made.
Thanks Larry for creating a profound context within which that discussion can go forward.
Victor Kane
consistency and conscious architecture
One of the things I hope we can move towards in Drupal 8 is a much greater consistency in how entities are saved and loaded. There is too much legacy of ancient Drupal versions in user, node, and other modules. This is more a question of technical debt and a need to consciously re-architect, which is not especially fun or sexy, but I think it would be a huge boon to the developer experience. Right now some of the code flow in the entity system is nearly impossible to follow.
In general I hope we can focus more on making conscious architectural decisions in advance, and I think there has been a lot of positive effort in this direction at the last two DrupalCons.
How does that fence feel
Lots of questions, not much opinion. Most of this is obvious, IMO. Of course we are trading off between large and small site needs. Of course software architecture and patterns exist and are useful. Let's hear some opinions on these trade-offs. Let's hear some proposals on the path forward. Hope that's coming in a future blog post.
"Obvious" is a relative term
Architectural analysis is obvious once you understand it, but I don't think we've really thought about it consciously. That's the goal here. If we start thinking about these subjects consciously that helps everyone involved better justify their opinions on those trade-offs (rather than "it sucks, it's slow, it's inflexible, bah!") and we can actually analyze those proposals moving forward. I make no claim of having the only proposals or opinions worth implementing, although I certainly have those I think are worth implementing :-). Understanding the context in which those opinions and proposals are made is important in evaluating them.
This post is actually part 1 of a series. The rest are coming soon. I don't know yet when they'll start having concrete proposals, but I think Butler is a big enough proposal to start with for now. :-)
In the eye of the beholder
While I agree that more meat is good, I feel that laying out the skeleton as has been done here is very valuable. This information may be obvious to some, but I don't believe it is obvious to most. I think most of the better and more influential techies in Drupalland would benefit from spending a little more time talking to the rest of us at the DUGs, in the #drupal-support IRC channel and others, in the forums, and maybe reading some of the blog entries out there. The "beginner" stuff actually seems to be most of it. I believe you'd quickly see that most Drupal developers do not have this level of understanding, not even close, not even coders using Drupal for years.
That's just the coders. A large proportion of Drupal developers are designers, project leads, amateurs or non-techies -- and a couple that I know read this post -- so I think it's a mistake to assume that in-depth software knowledge is obvious to most programmers, let alone everyone using Drupal. Even among the coders there is a ton of variety in the worlds they come from, from straight out of college to those who've been working on non-web work for decades and are still adjusting to very different ways of thinking about software.
I think that more blog posts like this are needed. Let's get even more obvious. Let's reduce the "Move up to my level. It's easy!" viewpoint and expand the "This is how you get to where I am" viewpoints. Let's create bridges between the knows and the don't-knows. Let's have more understanding. Let's remove as many techie barriers as we can. Let's make that path to becoming a contributor a much shorter, simpler, and more rewarding one.
And let's talk about it. Dries talks about catering for both the large and the small sites during his keynotes, that our goal is to get more enterprise clients but still make it easier to develop and run small sites. Maybe even to remove coders from the equation completely. So if it is indeed obvious that, "Of course we are trading off between large and small site needs," then it's worth talking about where these two perspectives intersect.
Nail on the head
Grant, that's a perfect summary of my goal here. :-) I have a few more articles like this in the pipeline. The goal is to try and build a common vocabulary and basis for discussing some of the very complicated and hard questions facing Drupal as a large-scale system. If we really want to crowd-source Drupal, we need to establish a baseline level of knowledge in order to do so... and then raise as many people as possible up to that baseline and beyond.
Emerging higher order system
Don't we want to get most of these factors instead of making trade-offs? Could we have win-win situations through creativity?
I don't have any answers, but there is a tendency in evolutionary studies with respect to conflicts: the easy ones get solved, but the harder, more robust ones usually lead to higher-order (and more dynamic) systems. Might some of the architectural trade-offs lead to higher-order dynamics?
Isn't Butler an architectural innovation? How does it relate to other suggestions?
For example, during the DrupalCon Copenhagen keynote, Rasmus suggested a deploy mechanism for performance. Maybe Drush could evolve into this? A deploy mechanism could, for example, first gather information about the server (e.g. MongoDB or not) and about the specific distribution (small core or another pre-packaged distribution). Depending on that information, a different design could be deployed. (Could future Drupal versions have dynamic designs?)
I understand that research on Context-Oriented Programming (COP) is relevant in this respect; it even changes code at run time. I've got no clue whether such a thing is even slightly possible for Drupal, but it seems the discussion on "context" (see the Butler group) may open opportunities to develop COP ideas in the future.
I agree that we should have more conscious architecture. Considering the complexity and the amount of change, I'm a bit afraid of outdated versions; therefore it seems better to visualize what we are doing. Unit testing, for example, lets us see what we do. I know one of my colleagues did some research on using logic programming to recover the OO design from the actual code. I've got no clue how well such tools work today, but like unit testing, they seem the best way to manage a distributed system.
Architecture
McDonalds sells more hamburgers than ever before, and yet people also buy more and more organic food. People drink more water, and at the same time, we're consuming more and more sugared caffeinated drinks. The world is at odds sometimes. Contradictions are everywhere. ;-)
Similarly, we have some architectural debt in Drupal. At the same time, we also have a huge architectural advantage compared to any other Open Source PHP-based CMS -- at least in my view of the world.
I, too, am obviously worried about the smaller sites. Acquia's Drupal Gardens is actually a large collection of small sites, so unlike what you suggest, relative to Drupal Gardens we don't care too much about the architectural changes in Drupal 7. Our biggest challenge for Drupal Gardens is Drupal's usability, not the underlying architecture. As Drupal Gardens grows, I expect Acquia will actually start to care more and more about the smaller Drupal sites ...
Every major release cycle has a theme or two. This post certainly inspires me about what these themes should be for Drupal 8. Thanks for the great blog post. Regardless of what these themes will be, we'll certainly continue to make architectural improvements. We always have.
"Small" is a misleading term
Gardens is aimed at hosting small-volume sites, yes. However, Acquia has the engineers and know-how to run it on finely tuned Apache, master/slave MySQL servers, memcache, Varnish, Solr, MongoDB if Acquia decided to go that route, etc. (I don't know off hand which of those Gardens includes, but they're all things Acquia could easily do if they wanted to.) That's something that sites hosting with random $10/month hosts don't have. That's more the difference I was going for, although you're right that Gardens is an interesting cross-over in that sense.
As with Examiner above, I don't mean that Acquia is anti-small site; I mean that the architecture that would benefit Gardens may not work so well for self-hosting small sites, and vice-versa.
And yes, I think it's obvious by now that "architectural unity and cleanliness" is a theme I'm pushing hard for Drupal 8 as well. :-)
Acquia has the engineers and
I already posted this above, but there's no particular reason for Acquia to use most of the technologies you mentioned. Solr makes sense not only for performance but also for features (and it's offered as a service in itself, including to sites on shared hosting), and they apparently use Varnish for page caching. Any hosting company should tune Apache; it's a shame most don't. As for the rest, MySQL load balancing, memcache, and MongoDB are all added complexity which won't benefit small low-traffic sites. I'm sure they'll offer all or some of these as part of high-end hosting plans, but I would be surprised if they were implemented on Gardens in the near future.
In fact, at the moment I'm not sure MongoDB would be a good fit for Sony BMG either. As far as I know, their 'full-time PHP developers' work on the multi-site platform which powers the 100+ artist sites, but I doubt they have a PHP developer working on each of those 100 sites; otherwise, why invest so heavily in Views and Panels? And at least at the moment you need a dedicated developer for the specific site to make use of MongoDB field storage (I'm sure that'll change during the D7 release cycle, though).
This isn't just about the specific examples you chose; the performance and scaling issues faced by sites differ massively. A site with millions of nodes might serve 10,000 page views/month. A site with ten nodes might serve millions of pages/month. If you have ten nodes, millions of page views/month, but all anonymous users, you could probably run that on shared hosting too, just by enabling CSS/JS aggregation and page caching. Add a forum to that ten-node site and you could very quickly need a dedicated server once a couple of hundred people register and start posting.
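(For concreteness: those two optimizations are just checkboxes on Drupal 7's admin/config/development/performance page. Set programmatically they would look roughly like this, using what I believe are the correct Drupal 7 variable names:)

```php
<?php
// Enable the anonymous page cache and CSS/JS aggregation, the two
// cheapest wins for a high-traffic, mostly anonymous site.
variable_set('cache', 1);           // serve cached pages to anonymous users
variable_set('preprocess_css', 1);  // aggregate CSS files
variable_set('preprocess_js', 1);   // aggregate JS files
```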
Yet another counter-example is that we're starting to see the rise of Drupal-specific hosting; for example, http://omega8.cc offers Varnish, Solr, and memcache for $17/month. $17/month is more than $5/month, but it's also in the same region as the 'deluxe' plans from GoDaddy or Dreamhost. So once performance requirements go beyond the most basic shared hosting, there's really not much of a jump any more. Probably the hardest part is finding out what memcache and Varnish are and realising that you need them in the first place (and if you had to pay someone to tell you, that'd likely cost more than the extra hosting you'd be paying for).
Lots of complexity
Yes, there's lots of complexity to these trade-offs, and it's not just "big vs. small". I only had so much space to fill though before people got tired of me talking so I had to simplify it. :-) The point is that if we structure the system for one set of trade-offs it may harm other setups. A method that helps both may harm some other architectural desire (usability, perhaps, or testability).
I'm trying to lay out the framework by which we evaluate those decisions intelligently.
Embrace complexity
I very much like "architectural unity and cleanliness" as a theme. I would like to mention that architectural innovations are opportunities to solve new contradictions. So the idea of a "framework by which we evaluate those decisions intelligently" is great; keep in mind the new opportunities that arise with it. We may not need to create trade-offs. Like Dries mentions, contradictions are quite natural. I tried to express this earlier on as the emergence of higher-order systems.
We talked about one such opportunity at DrupalCon: if we could systematize the way you had to go through the code to find the missing architectural issues for Butler, we could create new tools like organic roadmaps. If we can have the trade-offs more formally described, we could create deploy mechanisms, etc. I hope discussions like this will lead to more architecture: there is no end to creativity!
Unlike the very angry ranters
Unlike the very angry ranters above, I thought this was a very insightful post revealing the decision process of the Drupal core designers.
The comments also reveal a lot. :)
Debate
Interesting discussion and interesting comments.
@moshe: although this may be obvious to some, it is invaluable that it is clearly spelt out. Not everyone has been involved with Drupal core development from the start, and understanding how we got to where we are is important.
@chx and @crell: there is little to be gained from turning this into some sort of fight. When writing, one should be careful to avoid setting oneself up for attacks on issues that are secondary to the main argument; when replying, one should focus on what is really at stake and avoid giving the impression that it is somehow almost personal.
@dries: opposites can coexist, but everything is about a trade-off. McDonald's would sell more burgers if people ate less organic food.
Consistency, and clear answers as to why a system is the way it is, are hallmarks of at least the attempt at good architecture. This post is a great step in the right direction, and successful in that it got people discussing the true issues (at least some of the time).
Moving towards enterprise is a good thing IMO
And here's why: a complex system can scale itself down, reduce complexity, and adapt itself to different requirements. A fast but primitive system, OTOH, can't scale itself up and increase its own complexity. Enterprise-level Drupal could work in a "dumb but fast" mode for shared hosting, satisfying both camps.
Abstraction has its own costs, but hardware gets faster and cheaper over time too, so why not capitalize on that?
That's like random housewife
That's like a random housewife running an enterprise-level Linux kernel on her netbook. If it works for Linux, why wouldn't it work for Drupal?
Not always
Some complex systems can "scale down". Others cannot. Drupal 7 has a baseline memory footprint that is larger than Drupal 6's. That cannot go down without major refactoring. Complexity has its own costs, which are not always easy to trim out.
A well-designed architecture that is built to scale both up and down is an extremely powerful thing. It's also very hard to get right. Most "enterprise systems" have a hard time scaling down conceptually because they were built with an eye primarily toward big and complex systems. The Daily WTF is full of wonderful/awful examples.
Great article Larry! There
Great article Larry!
There are many developers out there whose sites I've inherited who took the "null architecture" approach, with 30+ custom modules. It's not a pretty sight/site.
But I can't believe how many long-term core developers missed the point of the article and chose to focus on the minuscule details of your examples. Balancing these priorities is not only important for the future of Drupal core, but also the future of the entire Drupal ecosystem.
Before you can balance
Before you can balance priorities, you need to know what those priorities are. I think the gaps in the examples here point to that discussion not really having happened yet.
Arrggh, swing-axles
Bad analogy. Swing axles were a horrible attempt to get the benefits of independent suspension on the cheap. A solid axle has the virtue of keeping the rear wheels flat on the ground, at least until inertia lifts one of them off the pavement. Independent suspension, which requires four expensive universal joints, also has this virtue, but does a better job of keeping the wheels on the ground in turns.
Swing axles, with only one universal joint, allow the tire contact patch to vary, guaranteeing that you have the least contact when you need it most. No real sports cars had swing axles, or if they did (like the early Porsches), they didn't have them long. The highest profile cars to use them -- classic Mercedes 190 and 300SLs -- were more about being beautiful than handling well. The legendary 300SLR racers had independent rear suspension.
Then there is De Dion tube rear suspension, but don't get me started...
I hate threaded comments.
I hate threaded comments. I'm getting notifications of new comments here, and it's impossible to follow the discussions because of the tangled comment structure. +1 for a flat list.