
Following the development of database technology from large transaction-processing systems to microcomputers and beyond.
Introduction
Database processing was originally used in major corporations and large organizations as the basis of large transaction-processing systems. Later, as microcomputers gained popularity, database technology migrated to micros and was used for single-user, personal database applications. Next, as micros were connected together in work groups, database technology moved to the workgroup setting. Finally, databases are being used today for Internet and intranet applications.
1960: The Organizational Context
The initial application of database technology was to resolve problems with file-processing systems. In the mid-1960s, large corporations were producing data at phenomenal rates in file-processing systems, but the data were becoming difficult to manage, and new systems were becoming increasingly difficult to develop. Furthermore, management wanted to be able to relate the data in one file system to those in another.

The limitations of file processing prevented the easy integration of data. Database technology, however, held out the promise of a solution to these problems, and so large companies began to develop organizational databases. Companies centralized their operational data, such as orders, inventory, and accounting data, in these databases. The applications were primarily organization-wide transaction-processing systems.
At first, when the technology was new, database applications were difficult to develop, and there were many failures. Even those applications that were successful were slow and unreliable: The computer hardware could not handle the volume of transactions quickly; the developers had not yet discovered more efficient ways to store and retrieve data; and the programmers were still new at accessing databases, and sometimes their programs did not work correctly.
Companies found another disadvantage of database processing: vulnerability. If a file-processing system fails, only that particular application will be out of commission. But if the database fails, all of its dependent applications will be out of commission.
Gradually, the situation improved. Hardware and software engineers learned how to build systems powerful enough to support many concurrent users and fast enough to keep up with the daily workload of transactions. New ways of controlling, protecting, and backing up the database were devised. Standard procedures for database processing evolved, and programmers learned how to write more efficient and more maintainable code. By the mid-1970s, databases could efficiently and reliably process organizational applications. Many of those applications are still running today, more than 25 years after their creation!
1970: The Relational Model
Overview
In 1970, E.F. Codd published a landmark paper in which he applied concepts from a branch of mathematics called relational algebra to the problem of storing large amounts of data. Codd’s paper started a movement in the database community that in a few years led to the definition of the relational database model. This model is a particular way of structuring and processing a database.
Benefits of the Relational Model

The advantage of the relational model is that data are stored in a way that minimizes duplication and eliminates certain types of processing errors that can occur when data are stored in other ways. Data are stored as tables, with rows and columns.
According to the relational model, not all tables are equally desirable. Using a process called normalization, a table that is not desirable can be changed into two or more tables that are.
Another key advantage of the relational model is that columns contain data that relate one row to another. This makes the relationships among rows visible to the user.
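To make these ideas concrete, here is a minimal sketch using Python's built-in sqlite3 module. All table and column names are hypothetical; the point is that a normalized design stores each customer's data only once, and a shared CustomerID column relates order rows to customer rows.

```python
import sqlite3

# In-memory database for illustration; all names here are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A normalized design: customer data lives in one table, orders in another.
# The CustomerID column in Orders relates each order row to a customer row.
cur.execute("""CREATE TABLE Customers (
    CustomerID INTEGER PRIMARY KEY,
    Name       TEXT NOT NULL,
    City       TEXT NOT NULL)""")
cur.execute("""CREATE TABLE Orders (
    OrderID    INTEGER PRIMARY KEY,
    CustomerID INTEGER NOT NULL REFERENCES Customers(CustomerID),
    Amount     REAL NOT NULL)""")

cur.execute("INSERT INTO Customers VALUES (1, 'Acme Corp', 'Chicago')")
cur.executemany("INSERT INTO Orders VALUES (?, ?, ?)",
                [(100, 1, 250.0), (101, 1, 75.5)])

# A join recombines the rows: the shared CustomerID column makes the
# relationship between the two tables visible and usable.
for row in cur.execute("""SELECT o.OrderID, c.Name, o.Amount
                          FROM Orders o JOIN Customers c
                          ON o.CustomerID = c.CustomerID"""):
    print(row)
```

The join in the last statement recombines the two tables on demand, which is exactly the row-combining step that, as described next, proved harder for non-specialists than early advocates expected.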
At first, it was thought that the relational model would enable users to obtain information from databases without the assistance of MIS professionals. Part of the rationale for this idea was that tables are simple constructs that are intuitively understandable. Additionally, since the relationships are stored in the data themselves, users would be able to combine rows when necessary.
It turned out that this process was too difficult for most users. Hence, the promise of the relational model as a means for non-specialists to access a database was never realized. In retrospect, the key benefit of the relational model has turned out to be that it provides a standard way for specialists (like you!) to structure and process a database.
Resistance to the Relational Model
Initially the relational model encountered a good deal of resistance. Relational database systems require more computer resources, and so at first they were much slower than the systems based on earlier database models. Although they were easier to use, the slow response time was often unacceptable. To some extent, relational DBMS products were impractical until the 1980s, when faster computer hardware was developed and the price-performance ratio of computers fell dramatically.
The relational model also seemed foreign to many programmers, who were accustomed to writing programs that processed data one record at a time. Relational DBMS products, by contrast, most naturally process data an entire table at a time. Accordingly, programmers had to learn a new way to think about data processing.
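The contrast is easy to see in a small sketch. Assuming a hypothetical Parts table in Python's sqlite3, the two functions below double every price: the first in the record-at-a-time style familiar from file processing, the second with a single set-oriented statement.

```python
import sqlite3

# Hypothetical Parts table used to contrast the two styles.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Parts (PartID INTEGER PRIMARY KEY, Price REAL)")
conn.executemany("INSERT INTO Parts VALUES (?, ?)",
                 [(1, 10.0), (2, 20.0), (3, 30.0)])

def double_prices_record_at_a_time(conn):
    # The file-processing habit: fetch each record, modify it, write it back.
    rows = conn.execute("SELECT PartID, Price FROM Parts").fetchall()
    for part_id, price in rows:
        conn.execute("UPDATE Parts SET Price = ? WHERE PartID = ?",
                     (price * 2, part_id))

def double_prices_set_at_a_time(conn):
    # The relational habit: one declarative statement over the whole table.
    conn.execute("UPDATE Parts SET Price = Price * 2")

double_prices_set_at_a_time(conn)
print(conn.execute("SELECT PartID, Price FROM Parts").fetchall())
# [(1, 20.0), (2, 40.0), (3, 60.0)]
```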
Because of these problems, even though the relational model had many advantages, it did not gain true popularity until computers became more powerful. In particular, as microcomputers entered the scene, more and more CPU cycles could be devoted to a single user. Such power was a boon to relational DBMS products and set the stage for the next major database development.
1980: Object-Oriented DBMS and Microcomputer DBMS Products

In 1979, a small company called Ashton-Tate introduced a microcomputer product, dBase II (pronounced “d base two”), and called it a relational DBMS. In an exceedingly successful promotional tactic, Ashton-Tate distributed, nearly free of charge, more than 100,000 copies of its product to purchasers of the then-new Osborne microcomputers. Many of the people who bought these computers were pioneers in the microcomputer industry. They began to invent microcomputer applications using dBase, and the number of dBase applications grew quickly. As a result, Ashton-Tate became one of the first major corporations in the microcomputer industry. Later, Ashton-Tate was purchased by Borland, which now sells the dBase line of products.
The success of this product, however, confused and confounded the subject of database processing. The problem was this: According to the definition prevalent in the late 1970s, dBase II was neither a DBMS nor relational. In fact, it was a programming language with generalized file-processing (not database-processing) capabilities. The systems that were developed with dBase II appeared much more like those in Figure 1-10 than the ones in Figure 1-9. The million or so users of dBase II thought they were using a relational DBMS when, in fact, they were not.
Thus, the terms “database management system” and “relational database” were used loosely at the start of the microcomputer boom. Most of the people who were processing a microcomputer database were really managing files and were not receiving the benefits of database processing, although they did not realize it. Today, the situation has changed as the microcomputer marketplace has become more mature and sophisticated. dBase 5 and the dBase products that followed it are truly relational DBMS products.
Although dBase pioneered the application of database technology on microcomputers, other vendors also began to move their products from the mainframe to the microcomputer. Oracle, Focus, and Ingres are three examples of DBMS products that were ported down to microcomputers. They are truly DBMS programs, and most would agree that they are truly relational as well. In addition, other vendors developed new relational DBMS products especially for micros. Paradox, Revelation, MDBS, Helix, and a number of other products fall into this category.
One impact of the move of database technology to the micro was the dramatic improvement in DBMS user interfaces. Users of microcomputer systems are generally not MIS professionals, and they will not put up with the clumsy and awkward user interfaces common on mainframe DBMS products. Thus, as DBMS products were devised for micros, user interfaces had to be simplified and made easier to use. This was possible because micro DBMS products operate on dedicated computers and because more computer power was available to process the user interface. Today, DBMS products are rich and robust, with graphical user interfaces such as that of Microsoft Windows.
The combination of microcomputers, the relational model, and vastly improved user interfaces enabled database technology to move from an organizational context to a personal-computing context. When this occurred, the number of sites that used database technology exploded. In 1980 there were about 10,000 sites using DBMS products in the United States. Today there are well over 20 million such sites!
In the late 1980s, a new style of programming called object-oriented programming (OOP) began to be used, which has a substantially different orientation from that of traditional programming. In brief, the data structures processed with OOP are considerably more complex than those processed with traditional languages. These data structures also are difficult to store in existing relational DBMS products. As a consequence, a new category of DBMS products called object-oriented database systems is evolving to store and process OOP data structures.
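A small, hypothetical sketch suggests why. The Python classes below build the kind of nested, interconnected structure an OOP program holds in memory; mapping it onto flat tables requires extra key columns and reassembly logic, a gap often called the object-relational impedance mismatch.

```python
from dataclasses import dataclass, field

# Hypothetical classes illustrating the nested, interconnected structures
# an object-oriented program builds in memory.
@dataclass
class Component:
    name: str
    subcomponents: list["Component"] = field(default_factory=list)

@dataclass
class Assembly:
    part_number: str
    root: Component  # an arbitrarily deep tree of components

engine = Assembly(
    part_number="E-100",
    root=Component("engine", [
        Component("block", [Component("cylinder") for _ in range(4)]),
        Component("crankshaft"),
    ]),
)

# Storing 'engine' in flat relational tables means inventing parent-child
# key columns for the tree and reassembling it later with repeated joins;
# an ODBMS stores and retrieves the object graph directly.
```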
For a variety of reasons, OOP has not yet been widely used for business information systems. First, it is difficult to use, and it is very expensive to develop OOP applications. Second, most organizations have millions or billions of bytes of data already organized in relational databases, and they are unwilling to bear the cost and risk required to convert those databases to an ODBMS format. Finally, most ODBMS products have been developed to support engineering applications, and they do not have features and functions that are appropriate or readily adaptable to business information applications.
Consequently, for the foreseeable future, ODBMS are likely to occupy a niche in commercial information systems applications.
1990: Client-Server Database Applications

In the middle to late 1980s, end users began to connect their separate microcomputers using local area networks (LANs). These networks enabled computers to send data to one another at previously unimaginable rates. The first applications of this technology shared peripherals, such as fast, large-capacity disks and expensive printers and plotters, and facilitated intercomputer communication via electronic mail. In time, however, end users wanted to share their databases as well, which led to the development of multi-user database applications on LANs.
The LAN-based multi-user architecture is considerably different from the multi-user architecture used on mainframe databases. With a mainframe, only one CPU is involved in database application processing, but with LAN systems, many CPUs can be simultaneously involved. Because this situation was both advantageous (greater performance) and more problematic (coordinating the actions of independent CPUs), it led to a new style of multi-user database processing called the client-server database architecture.
Not all database processing on a LAN is client-server processing. A simpler, but less robust, mode of processing is called the file-sharing architecture. A company like Treble Clef could most likely use either type, since it is a small organization with modest processing requirements. Larger workgroups, however, would require client-server processing. The sketch below contrasts the two.
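The difference can be sketched in a few lines of Python. In the client-server pattern below, a server runs the DBMS engine (sqlite3 here, standing in for a true database server) and ships back only result rows; the address, port, and table contents are all hypothetical. Under file sharing, by contrast, each client would open the shared database file directly and do all query processing on its own CPU.

```python
import sqlite3
import threading
from multiprocessing.connection import Client, Listener

ADDRESS = ("localhost", 6000)  # hypothetical host and port

def serve_one_request(listener):
    # Client-server: the DBMS engine runs HERE, on the server's CPU.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE Customers (Name TEXT, City TEXT)")
    db.executemany("INSERT INTO Customers VALUES (?, ?)",
                   [("Acme Corp", "Chicago"), ("Treble Clef", "Seattle")])
    with listener.accept() as conn:
        query = conn.recv()                       # the request comes in...
        conn.send(db.execute(query).fetchall())   # ...only results go back

# Bind the listener first so the client cannot race ahead of the server.
listener = Listener(ADDRESS)
threading.Thread(target=serve_one_request, args=(listener,), daemon=True).start()

# The client ships SQL text across the LAN, not raw file blocks. Under a
# file-sharing architecture, this process would instead open the shared
# database file itself and run the whole query on its own CPU.
with Client(ADDRESS) as conn:
    conn.send("SELECT Name FROM Customers WHERE City = 'Seattle'")
    print(conn.recv())  # [('Treble Clef',)]
```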
2000: Databases Using Internet Technology

Database technology is now being used in conjunction with Internet technology to publish database data on the WWW. This same technology is used to publish applications over corporate and organizational intranets. Some experts believe that, in time, all database applications will be delivered using HTTP, XML, and related technologies, even personal databases that are “published” to a single person.
Because many database applications will use Internet technology to publish databases on organizational intranets and department LANs, it is incorrect to refer to this category of application as “Internet databases.” The phrase “databases using Internet technology” should be used instead.
XML in particular serves the needs of database applications exceptionally well, and it will likely be the basis of many new database products and services.
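As an illustration of that claim, the minimal sketch below uses Python's standard sqlite3 and xml.etree.ElementTree modules to render database rows as XML that could then be published over HTTP; the table and element names are invented for the example.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical rows to publish.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID INTEGER, Name TEXT, City TEXT)")
conn.executemany("INSERT INTO Customers VALUES (?, ?, ?)",
                 [(1, "Acme Corp", "Chicago"), (2, "Treble Clef", "Seattle")])

# Render each row as an XML element; the resulting document could be
# served over HTTP on the public Web or on an intranet alike.
root = ET.Element("customers")
for cust_id, name, city in conn.execute("SELECT CustomerID, Name, City FROM Customers"):
    customer = ET.SubElement(root, "customer", id=str(cust_id))
    ET.SubElement(customer, "name").text = name
    ET.SubElement(customer, "city").text = city

print(ET.tostring(root, encoding="unicode"))
```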
Originally published by Dr. Robert F. Melworm, Kean University, free and open access, republished with permission for educational, non-commercial purposes.