A Method for Understanding Object Oriented Programming in the Real World
Bowers' Law: A program cannot safely rise above its data structures.
OO Corollary to Bowers' Law: The primary contribution of OO as a paradigm is to allow the creation of more complex data structures than competing paradigms can.
Hey, "Bowers' Law" is a much more interesting label than "Lemma 1", no?
A programmer can try to extend a program's data structures' capabilities with code for some specific purpose, but the code for the various purposes will inevitably get out of sync: the kludges necessary to compute values that should be stored as data start conflicting (usually because of different hidden assumptions), copy & paste programming proliferates, and the whole becomes difficult to extend. It rapidly collapses into something too complicated for a mere mortal to understand. This kind of complexity causes geometrically more effort to be required for every new feature... and in the worst cases, every new bug fix!
If a program must be put to new purpose, if the program's behavior must get significantly more sophisticated, the sophistication must almost always come from more sophistication in the data structures, not the code.
The optimal solution is to add a true menu to the browser. In the web arena this is generally not feasible, which necessitates such hacks as the linked menus, but just because the authors were forced into the hack to reach their goal does not mean they are immune to its effects. If you do use a non-cross-platform browser extension, such as ActiveX controls, you can express "menus" much more cleanly.
OO Corollary to Bowers' Law: The primary contribution of OO as a paradigm is to allow the creation of more complex data structures than competing paradigms can.
Object Oriented Programming is grounded in the recognition that what defines a "data structure" is not the bits that represent the data itself, but how you actually get the information out of the data structure and do something with it. The bits themselves are by comparison much less important, and in the case of values computed every time they are needed, may not even literally exist.
Object Oriented programming has a 20+ year history, but in my opinion, it only became truly viable three or four years ago. This is mostly because languages that actually make it possible to use advanced OO concepts without making you pull your hair out have only recently come into existence.
Friction matters; if it's so hard that only the gurus can do it, even the gurus frequently won't, because they don't like hard work any more than the rest of us.
In my opinion, there are still not very many languages that have risen to the level of ease necessary to make these concepts really work. Python is one of the best. C++ with templates is good, though it takes a lot of knowledge and experience to use at full power. Perl almost cuts it in the hands of masters, but in the hands of an inexperienced programmer it's horrible. Java doesn't cut it.
This is not an exhaustive list, but the full list would be short, and the list of (existing) languages that meet the bar and either have penetrated the mainstream, or may someday penetrate the mainstream, is very short indeed.
It is easy in an object-oriented language to create a "bi-directional link"; a data structure that manages two pointers between two other structures. You abstract the operation of managing the pointer into accessor functions, and require that all users use your functions to manipulate the pointer.
In the procedural paradigm, everybody using the pointer is responsible for updating it correctly themselves. This hardwires assumptions about the pointer into every function that uses it.
The so-called "OO" languages help you by making it easy to tie the data to their accessor functions, and by making it easier to use the accessor functions correctly than to mess with the data directly (making it much more likely client programmers will use it correctly), but this is not strictly speaking required for OO design; it just requires a lot more developer discipline if it's gone.
When you later add complex behavior to the link (suppose you want the link to notify some centralized link tracker in some cases), these accessors can be changed without changing those who use the link abstraction. You cannot always change the implementation without changing the clients, but the range of changes you can make with OO vs. traditional procedural is much, much greater. (Note that many OO advocates did claim you could change your implementation without changing the clients, but the OO-style only increases the chance that they will not need to change; it does not eliminate the possibility.)
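To make the link example concrete, here is a minimal Python sketch (the Node class and its method names are hypothetical, invented purely for illustration): all pointer manipulation goes through set_partner(), so the two ends can never disagree, and a notification call to some centralized link tracker could later be added inside set_partner() without touching any caller.

```python
class Node:
    """One end of a bi-directional link. All pointer updates go
    through set_partner(), so the two ends can never get out of sync."""

    def __init__(self, name):
        self.name = name
        self._partner = None  # managed only through the accessors below

    def get_partner(self):
        return self._partner

    def set_partner(self, other):
        # Detach our existing link first, keeping both ends consistent.
        if self._partner is not None:
            self._partner._partner = None
        self._partner = other
        if other is not None:
            # Detach other's old partner too, then point it back at us.
            if other._partner is not None:
                other._partner._partner = None
            other._partner = self

a = Node("a")
b = Node("b")
a.set_partner(b)  # both ends now agree: a <-> b
```

Relinking a to a third node automatically unhooks b from both sides, which is exactly the bookkeeping every client would otherwise have to get right on its own.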
Bowers' Law helps explain one of the great mysteries of OO: the gap between how much OO is used in the real world and how much OO theory says should be used.
"Theoretically", OO is supposed to be used pervasively, such that everything is an object. For instance, Java works this way because that is how OO is theoretically supposed to work; it is not actually possible to have code in Java that is not in a class.
The reality is that some code in real systems is almost always outside of a class. Java systems are frequently implemented with some "class" that just consists of one static function, which when invoked effectively acts as procedural code, not an object.
This incongruity troubles many OO advocates; it seems like "OO all the way down" is the correct approach, in the sense that it should bring the maximal OO benefits, yet it is not the approach that has won out in the real world. One can either assume that real-world OO users are stupid (condescending and not universally true), or that somehow the assertion that "OO all the way down is the path to maximal OO benefits" is not true.
The answer is the latter. OO is of benefit to the extent that it provides wonderfully intelligent data types. The benefits are so nice that it is typically ideal to implement almost all of the system with OO paradigms so most things can be manipulated easily. However, at the top-most level of the program, there is generally no benefit to doing that in OO. In a sense, the OO in the end just serves a thin procedural wrapper that drives the data model to do what you want.
Do not take this as justification for suddenly writing reams of procedural code. But generally speaking, if you're writing a top-level program whose primary purpose in life is to manipulate objects, there's no need to feel like you're committing a crime for doing it procedurally, not in an OO fashion.
After all, most of the things a user does on a computer are procedural: "Do this operation to this thing", as in "load this webpage", "append the character I just typed to the document", etc. It should be no surprise that some amount of procedural code survives.
Thus, there are a goodly number of programs which instantiate objects, do some limited manipulation of them, and exit; the ability of Perl or Python to facilitate this usage (along with some other useful characteristics) is why they make such excellent scripting languages. (Python makes this really nice in that you may not even need to write a script if you're doing a task just once; several times I've used the Python interactive mode as a shell on illegal steroids.) This leverages the strength of procedural programming languages ("this is what I want you to do") with the strength of intelligent data structures ("I will not allow you to corrupt me, so use me freely").
What is Data-Centric OO? In practice, it's similar to conventional OO, except you don't sweat the procedural code at the top-level. (You'll find that following other good OO practices will help you explicitly make good OO-based designs for this purpose, so there's not even a real difference in designing.) But look at how the theoretical differences play out in standard OO concepts:
The definition of Encapsulation is still somewhat controversial.
In Data-centric OO, Encapsulation defines the object. You create an intelligent data structure ("object") by ensuring that all access to that structure, read and write, go through your provided functions (generally called Methods). These functions implement the intelligence of your data structures, where you may implement such concepts as "thread safety", "remote access", or complex states of multiple inter-related objects.
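As a small illustration of intelligence living entirely in the methods, here is a hedged Python sketch (the Counter class and its names are invented for illustration): the "thread safety" concept mentioned above is implemented inside the accessors, and clients never see the lock.

```python
import threading

class Counter:
    """An 'intelligent' data structure: the intelligence (thread
    safety) lives entirely in the methods, invisible to callers."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:  # callers never see the locking
            self._value += 1

    def value(self):
        with self._lock:
            return self._value

# Four threads hammer the counter; the encapsulated lock keeps it consistent.
c = Counter()
threads = [threading.Thread(target=lambda: [c.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If a client incremented `_value` directly instead of calling `increment()`, the thread-safety property would silently evaporate; the whole point is that the methods are the only door.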
This encapsulation provides an Interface that you expect everybody will use to access your object. A happy side-effect is that you can then swap out your Implementation of the interface for another one, ideally without affecting the users of the interface.
An Object-Oriented Programming Language is one that provides some significant degree of programming support for this concept. Ideally, it makes it easier to use the access functions to get the data than to access the bits directly; this makes it more likely the programmer will Do The Right Thing and use your functions unless they absolutely need to do something directly.
If you have encapsulation of your data structures, you are most of the way to Data-centric OO even if you're not using an "OOPL", though you may miss much of the functionality layered on top of Encapsulation. The encapsulation itself defines the object; for instance, if you create a database and force all access to it through stored procedures and views, you can have an object-oriented database with the associated advantages of OO, though it'll look little like a conventional OOPL. For instance, see Table Oriented Programming, a somewhat controversial paradigm (in my opinion, it's controversial not because it's a bad idea, but because the author promotes it as much better than (traditional) OO, which ruffles some feathers); to the extent that you access your tables through a layer of abstraction (views, stored procedures, what-have-you), those are "objects" as far as I am concerned.
(Note I do not agree with everything on that site, I just hold it up as an interesting border case. Until we have better databases, I don't see Table Oriented Programming as worth pursuing.)
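To show that encapsulation, not a conventional OOPL, is what makes the "object", here is a sketch using Python's standard sqlite3 module (the table, view, and column names are invented for illustration): clients read through a view, which hides how the underlying bits are stored and computed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (item TEXT, qty INTEGER, unit_price REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [("widget", 2, 3.50), ("gadget", 1, 10.00)])

# The view is the 'interface': clients see a computed total, and the
# underlying column layout can change without breaking them.
conn.execute("""CREATE VIEW order_totals AS
                SELECT item, qty * unit_price AS total FROM orders""")

# Client code touches only the view, never the raw table.
totals = dict(conn.execute("SELECT item, total FROM order_totals"))
```

The `total` column is a value that "may not even literally exist": it is computed at query time, exactly the kind of intelligence encapsulation is for.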
Further definitions and refinements of "object" are provided by various languages, sometimes for theoretical reasons, sometimes for efficiency, sometimes for ease of implementation, but the fundamental property they share is that to some degree or another, they implement encapsulation of a data structure.
To the extent users of your data structure are expected to access the bits directly, you are not doing Data-centric OO, even if you are using a putative OOPL, and you will suffer the consequences of a procedural program that is "manipulating data" rather than "giving a short series of simple commands".
"Polymorphism" means that you can have some interface that allows you to manipulate multiple distinct data structures with the same driver code. Even if you have a Car structure and a Boat structure, there is some way to call a "Start" procedure on both of them easily that will cause both of them to start up.
Fundamentally, you have some operation you want to do to these structures, and you have some "dispatch" mechanism to call the right code for the job automatically, based on the data structure it is being called on. Some languages have more dispatch mechanisms than others, some expose them more than others, some may be static, some dynamic, and some (procedural languages) have none at all, in which case you may have to write them yourself.
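A minimal Python sketch of the Car/Boat example (all names invented for illustration): the driver code is written once, and the language's dynamic dispatch picks the right start() for each structure.

```python
class Car:
    def start(self):
        return "engine running"

class Boat:
    def start(self):
        return "motor running"

def start_all(vehicles):
    # The driver code neither knows nor cares which concrete type it
    # holds; Python's dynamic dispatch finds the right start() method.
    return [v.start() for v in vehicles]

results = start_all([Car(), Boat()])
```

A Plane class written years later with its own start() would work with start_all() unchanged, which is the "data structures the original author did not even imagine" payoff described below.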
Polymorphism is desirable because it allows you to write phenomenally powerful code concisely and generically, code that may even continue to work when new data structures the original author did not even imagine are thrown at it. Polymorphism is possible because of the Encapsulation of the respective data structures; without the similarity in interfaces polymorphism is not possible because you can't do the "same thing" to multiple different types.
Polymorphism is a derived result from encapsulation, not a fundamental requirement of OO... but if you program in such a way as to prevent yourself from leveraging polymorphism, you are most likely costing yourself dearly. From that point of view, it is indeed a fundamental part of understanding OO, Data-centric or otherwise.
Because of the encapsulation of the data structure, even much of the code used to maintain the data structure may be useful to different data structures. It is good to write code Once And Only Once, so it is helpful if that code can be re-used.
Some languages provide a mechanism to declare that one data structure is something called a "class" that collects the data and the access methods into one unit. Another unit can then be created that uses the original class's access methods by default unless you override them, and the overrides may still call the original functions. This new unit is called a "subclass", and the reuse of the original access functions is called "Inheritance".
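A small Python sketch of that mechanism (Stack and LoggingStack are invented names): the subclass overrides push(), but the override still calls the original via super(), and pop() is reused unchanged.

```python
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

class LoggingStack(Stack):
    """Overrides push() but still reuses (inherits) the original."""

    def __init__(self):
        super().__init__()
        self.log = []

    def push(self, item):
        self.log.append(item)   # new behavior...
        super().push(item)      # ...then delegate to the original

s = LoggingStack()
s.push(1)
s.push(2)
```

Note that nothing here depends on classes specifically: the same reuse could be had through aggregation and delegation, with LoggingStack holding a Stack and forwarding to it.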
For the purposes of this discussion, I am calling any reuse of previous code "inheritance", be it through what is traditionally called "inheritance", or through aggregation and delegation, prototype programming, or any of the other code-reuse mechanisms.
A wide variety of class mechanisms, subclassing mechanisms, and inheritance mechanisms have been built on top of encapsulation, and no one of them seems to be the obvious winner. Each has both theoretical and practical strengths and weaknesses. These abstractions are important only to the point they make it easier to write encapsulated data structures without duplicating code.
Note that Data-centric OO is a methodology, not a programming language. A "class" is a construct defined by some particular language, but it is not a pre-requisite to practicing OO, only a convenience. Thus, it is not necessary to speak of "classes" as a base unit required to be considered OO... especially in light of the wide variety of "classes" with different semantics in the real world. The utility of "classes" is derived from the encapsulation, so there is no "One True Class" semantic that will always be the best. Classless OO is possible; such languages are called "Prototype" languages, and data-centric OO encompasses those methodologies as well.
Classes have proved useful as an organizing principle in the real world but are not fundamental to OO the way encapsulation is.
IMHO, this is the biggest difference between traditional OO and Data-centric OO. With Data-centric OO, you are no longer concerned about OO as a paradigm; it is instead a way of looking at data.
As such, it can be applied to any situation where you are manipulating data. It is not in conflict with Functional Programming considered as "a programming style that treats functions as just more data, allowing them to be passed around as first-class values"; indeed its data handling ideas empower that style of programming even more.
I don't think it's actually in conflict with real Functional Programming, either; "methods" are just glorified function calls on a complex structure that is labelled with some type of class-like designation and I don't see anything special about these function calls that makes them different than normal function calls.
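To illustrate that "methods" are just function calls on structured state, here is a sketch of an "object" built from nothing but closures, with no class keyword anywhere (make_counter and its names are invented for illustration):

```python
def make_counter():
    """An 'object' assembled from plain functions and a closure:
    the 'methods' are ordinary function calls on captured state."""
    state = {"n": 0}  # the encapsulated bits; nothing outside can reach them

    def increment():
        state["n"] += 1

    def value():
        return state["n"]

    # The returned dict of functions is the entire interface.
    return {"increment": increment, "value": value}

counter = make_counter()
counter["increment"]()
counter["increment"]()
```

The state dict is reachable only through the two functions, so this is encapsulation, and therefore an object in the data-centric sense, despite being pure functions all the way down.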
There's no reason you can't use OO-type methodologies in your SQL database and reap at least some of the benefits. Indeed, as I understand database best-practice, forcing all access through views and stored procedures is better than allowing direct data access anyhow, and that is encapsulation to a T.
Aspect Oriented Programming is a way of preventing the replication of code; theoretically it is orthogonal to everything else, including OO, though at the moment it is primarily used in OO.
Under this understanding, OO is just a way of looking at data, not The Programming Paradigm To End All Paradigms. This works much better in the real world, and accounts for the successes in the real world much better than the Paradigm To End All Paradigms theory.
The utility of OO should not be underestimated, but the benefit is an emergent property of Encapsulation, not necessarily something that can be deliberately designed for. Anything else that incorporates "encapsulation" can also offer many of the benefits of OO.
Encapsulation is how "the recognition that what defines a 'data structure' is not the bits that represent the data itself, but how you actually get the information out of it and do something with it" goes from nice-sounding statement to implemented reality. Encapsulation adds an abstraction layer around the raw bits that separates how the data is stored from what the data itself is.
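A sketch of that abstraction layer in Python (Temperature is an invented example): the interface exposes fahrenheit as if it were stored, but those bits never literally exist; only celsius is stored, and clients cannot tell the difference.

```python
class Temperature:
    """How the data is stored (celsius) is separated from what the
    data is (a temperature, readable and writable in either unit)."""

    def __init__(self, celsius):
        self._celsius = celsius

    @property
    def fahrenheit(self):
        # Computed on demand; these bits are never stored anywhere.
        return self._celsius * 9 / 5 + 32

    @fahrenheit.setter
    def fahrenheit(self, f):
        self._celsius = (f - 32) * 5 / 9

t = Temperature(100)
```

The storage could later flip to fahrenheit, or to kelvin, without any client noticing: that is the abstraction layer doing its job.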
Encapsulation violations are dangerous in direct proportion to how much intelligence the encapsulation adds. If the encapsulation adds effectively nothing... consider a pure C int array, and let's stipulate correct index handling in the client code... then violating any provided encapsulation is probably not an issue. ints are ints. But as the relationships in the data get more complicated, violating the encapsulation becomes more and more dangerous, as the special properties of the data structure become more and more likely to be violated, including such special properties as "updates all users when it changes" (a very common and useful addition).
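To see why violations get more dangerous as intelligence grows, here is a Python sketch of the "updates all users when it changes" property (Observable and its names are invented for illustration): setting the value through the interface notifies every watcher, while poking the underscored attribute directly silently skips them.

```python
class Observable:
    """Setting the value through set() notifies all watchers;
    writing _value directly silently bypasses them."""

    def __init__(self, value):
        self._value = value
        self._watchers = []

    def watch(self, callback):
        self._watchers.append(callback)

    def set(self, value):
        self._value = value
        for cb in self._watchers:
            cb(value)

seen = []
obs = Observable(0)
obs.watch(seen.append)

obs.set(1)      # through the interface: the watcher fires
obs._value = 2  # encapsulation violation: the watcher never hears of it
```

After the violation, the watchers believe the value is 1 while it is actually 2; the structure's special property has quietly stopped being true, which is exactly the slide toward anti-data described below.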
As the encapsulation is violated, all clients of the supposedly-encapsulated data structure become responsible for more and more code to correctly access the data structure. It is not possible to correct all uses of the data structure as its purpose changes; sometimes it is not possible even in principle, if external users are out of your control. Eventually the data structure loses integrity entirely and becomes anti-data: worse than useless, because it causes actively bad things to happen. Only impossibly heroic feats of programming can maintain functionality after this occurs...
... and why bother? It's not necessary effort.
This is not to say that encapsulation is a holy barrier; the decision to breach it is subject to standard cost/benefit analysis. But do not underestimate the costs, which is easy to do because often most or all of them are in the future. The costs will be incurred.
Why did it take so long for OO to be viable? One major problem is simply that older languages made the programmer do work that we would now expect the compiler to do, and OO caused a lot of friction. But there were other problems too.
Earlier OO put the cart before the horse, placing too much emphasis on Polymorphism or Everything Is A Class or Message Passing or a number of other derivative concepts, rather than focusing on encapsulation. As a result, they tended to create rigid class implementations that locked you into The One True Pattern of OO Usage. If the particular pattern matched your needs, then there was nothing wrong with it. This is why C++ wasn't a "total failure"; it has some rigid ideas, but it matched some domains.
Bowers' Law does not logically imply one true Object Oriented semantic system that will always be correct. Searching for the Holy Grail is pointless and potentially counterproductive, since it seems to lead to excessive confidence in one's answers, as expressed in languages that locked the programmer into some system and tried to make it impossible to use any other. (For example, Java locks you into "Everything must be a class", which as a design decision for a "modern language" just boggles my mind; this is a clear case of just wishing harder that our pet OO theory was correct, and damn the practical consequences.)
Another smaller problem was believing that encapsulation had to be enforced by the compiler at all costs under all circumstances. However, "encapsulation as security" and "encapsulation as design criterion" are two separate things, and "encapsulation as security" tends to cause a lot of friction for the programmer. You need to figure out what is "public", what's "private", and what's "protected". You need to live with the mistakes other programmers make with these keywords. You get punished when you build on something for a couple of months and it turns out that you got one of the settings wrong. Many modern languages are making various OO concepts much easier to use by dropping the security aspects of encapsulation entirely, or, ideally, making them optional.
For more general use, more flexible systems are needed, and the flexibility needs to be easy. As I said earlier, only recently have such languages come into existence and become viable; you could get "easy" or you could get "flexible", but not really both. Since this blocked the practical use of many of the other emergent properties that make OO so cool in practice, they did not seem compelling vs. standard procedural languages, especially as many procedural languages could adopt many of the important capabilities of OO with a little programmer discipline.
Bowers' Law helps make sense out of OO and justify its use. It was not originally formulated with OO explicitly in mind, but they go together as if it was. Many large programs, even in non-"OO" languages, end up using these principles, because it is one of the best ways to hold a program together.
Once you accept that this is a good way of looking at program organization, it becomes clear why using a good OOPL, such as Python, is so desirable. Because friction matters, anything that can help make this easier will make you more likely to reap the benefits of good design.
But you can still use this sensibility, even in languages that have no support at all for these things, even assembler, and understanding these ideas will help you leverage OO languages better, and perhaps put your mind at ease about certain theoretical aspects of OO.
This builds a conception of OO not as a monolithic paradigm, but as a modular programming meta-paradigm with many distinct instantiations, each with their own value. By implication, while each of the elaborations placed on top of Data-centric OO, like Polymorphism and Inheritance, has common cases where it is not useful or may even hinder the programming effort, there is effectively never a time not to encapsulate your data structures with a layer of some kind of abstraction. The two basic exceptions are when you absolutely can't afford the overhead (in which case, use C++, which can compile much of it away), or when the program is short and will never need to grow, in which case nearly no design criterion applies anyhow.