
David Linthicum



Service Archaeologist

The hot job for 2005

Want to leverage your enterprise's Web services? Chances are you'll be enabling or exposing existing application services rather than building new ones. This should come as no surprise to anyone. However, while we've been focusing on the development of new services, how to do it, and what tools to use, most of the work I see coming is learning how to translate and expose legacy services. So, my prediction is that one of the best-paid jobs in 2005 will be engineers specializing in Web services enablement of legacy systems. I'll call them service archaeologists.

Indeed, most corporate data and business processing resides on mainframe computers; any attempt to move to an SOA has to include legacy/mainframe integration. Mainframes are not the mainframes of yesteryear. The good news is that most mainframes today are designed to work and play well with other systems within your IT infrastructure, and offer many points of integration at both the information and the service levels. The bad news is that traditional mainframe-based applications were not designed to communicate outside of their processing space, and thus with some types of legacy systems Web service enablement is an unnatural act. However, there are always mechanisms to provide integration; it's just a matter of understanding what your requirements are and selecting the correct technology.

Common Issues for "The Dig"
Legacy systems, such as mainframes, present three common issues that you must take into consideration: unstructured data, batch orientation, and the tight binding of logic and data.

Mainframe information has a tendency to be unstructured, meaning that the information is contained in flat files or indexed sequential files, not a database. This is challenging for those looking to expose that data to their SOA, since a lot has to be understood about how the data is stored and retrieved. While this can make integration difficult, it's not impossible. In some instances changes must be made to mainframe programs to externalize unstructured data, and in some cases you can account for the unstructured data within the middleware layer through data abstraction facilities.
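To make the data abstraction idea concrete, here is a minimal sketch of the kind of mapping a middleware layer performs over a fixed-width flat-file record. The field offsets, names, and the `CustomerRecordParser` class are invented for illustration; real copybook layouts vary widely and often involve packed-decimal fields not shown here.

```java
// Hypothetical sketch: mapping a fixed-width mainframe record to a
// structured object in a middleware abstraction layer. Offsets are
// invented for illustration; real copybook layouts vary.
public class CustomerRecordParser {

    public static class Customer {
        public final String id;
        public final String name;
        public final int balanceCents;

        Customer(String id, String name, int balanceCents) {
            this.id = id;
            this.name = name;
            this.balanceCents = balanceCents;
        }
    }

    // Parse one 50-byte flat-file record: cols 0-9 id, 10-39 name,
    // 40-49 zoned balance in cents (simplified; no packed decimal).
    public static Customer parse(String record) {
        String id = record.substring(0, 10).trim();
        String name = record.substring(10, 40).trim();
        int balanceCents = Integer.parseInt(record.substring(40, 50).trim());
        return new Customer(id, name, balanceCents);
    }
}
```

Once records are projected into objects like this, the service layer never needs to know how the bytes were laid out on the mainframe.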

The fact that many legacy/mainframe systems are batch-oriented presents its own set of problems and opportunities. SOAs typically don't deal with batch systems; information moves between systems record by record, service by service. So, in many instances changes must be made to internal processing on the mainframe to support a more granular data exposure mechanism. However, in some cases it may make better sense to design your SOA in such a way as to provide the ability to accept and produce large amounts of batch data. Services can be written to accomplish this task.
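The batch-to-record adaptation described above can be sketched as a thin service-layer adapter. This is an illustration only, with invented names: it accepts a batch payload (newline-delimited records, as a batch job might produce) and exposes each record to a per-record service, collecting the results so they can be handed back as a batch.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: an adapter between a batch-oriented
// legacy system and a record-by-record SOA. Names are invented,
// not taken from any specific product.
public class BatchAdapter {

    public interface RecordService {
        String process(String record);
    }

    // Fan a batch out to a per-record service, collecting results
    // so they can be returned to the mainframe as a batch again.
    public static List<String> processBatch(String batch, RecordService svc) {
        List<String> results = new ArrayList<>();
        for (String record : batch.split("\n")) {
            if (!record.isEmpty()) {
                results.add(svc.process(record));
            }
        }
        return results;
    }
}
```

The same shape works in reverse: a service that accumulates individual results and emits them as one batch file for the mainframe to consume.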

Finally, many mainframe-based information systems do not have a clear separation of data and logic, such as those that leverage file-based information (e.g., ISAM/COBOL). Here the challenge is more one of understanding than of integration. You must understand how the code is designed to access the information, and then decide either to expose the mainframe code itself as a service, sometimes requiring changes to the code, or, if possible, to access the information on its own, binding the data to services outside of the legacy system.

Mainframe Access Mechanisms
Your choices of enabling technology for accessing mainframe-based systems are numerous. Typically your hardware vendor will assist you in selecting the proper integration technology for your requirements, but you need to understand the full range of potential solutions available in the marketplace, which is growing.

To access mainframe-based processes you may use Logical Unit 6.2 (LU6.2). This is IBM's device-independent, process-to-process protocol that provides the facilities for peer-to-peer communications between two programs and also supports asynchronous networking. This mechanism allows you to leverage an internal mainframe process and, if needed, expose it as a service using LU6.2 as the access point, "wrapping" it as a Web service. You can build services on top of LU6.2 yourself, or leverage a middleware layer that manages the internal-process-to-Web-services translation for you (using LU6.2 or other interfaces).
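The "wrapping" pattern is simple to sketch, even though the transport details are not. In this hedged sketch, a thin service facade hides the legacy transport (LU6.2 or any other connector) behind a plain method; `LegacyConnector`, its `send()` method, and the `ORDSTAT` transaction name are hypothetical stand-ins for whatever middleware API you actually use.

```java
// Hedged sketch of the "wrapping" pattern: a service facade that
// hides the legacy transport behind a plain interface. The
// LegacyConnector type and ORDSTAT transaction are hypothetical.
public class OrderStatusService {

    public interface LegacyConnector {
        // Sends a request over the legacy protocol, returns raw reply.
        String send(String transaction, String payload);
    }

    private final LegacyConnector connector;

    public OrderStatusService(LegacyConnector connector) {
        this.connector = connector;
    }

    // The Web-service-facing operation: callers see a simple method;
    // the LU6.2 conversation is an implementation detail.
    public String getOrderStatus(String orderId) {
        return connector.send("ORDSTAT", orderId).trim();
    }
}
```

The value of the facade is that when the transport changes, say, from LU6.2 to a messaging bridge, the service contract your SOA consumes does not.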

For instance, as I'm writing this column Cape Clear Software and NEON Systems, Inc., announced that the two companies are working together to enable organizations to simplify the integration of their mainframe applications and data using Web Services and service-oriented architecture (SOA). The agreement includes tight integration between Cape Clear's Enterprise Service Bus (ESB) and NEON Systems' Shadow z/Services product, allowing organizations to integrate their mainframe applications and to provide those applications as services to the rest of the network.

To expose mainframe data as services to your SOA, you may leverage database gateways. These are also known as SQL gateways and are APIs that use a single interface to provide access to most databases residing on many different types of platforms. They are similar to virtual database middleware products, providing developers with access to any number of databases residing in environments that are typically difficult to access, such as a mainframe. For example, using an ODBC, JDBC, or Web services interface and a database gateway, developers can access data that resides in a DB2 database on a mainframe, in an Oracle database running on a minicomputer, or in a Sybase database running on a UNIX server. The developer simply makes an API call, and the database gateway does all the work, including distributing the query and assembling the data, perhaps exposed as a single virtual schema or view.
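The fan-out-and-assemble behavior of a gateway can be illustrated with a toy in-memory version (this is not a real SQL gateway, and the `VirtualSchemaGateway` and `Backend` names are invented): one `query()` call is distributed to several registered back ends and the rows are assembled into a single virtual result set. Real gateways do this over ODBC/JDBC with genuine SQL and protocol translation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy illustration of the gateway idea (not a real SQL gateway):
// one query() call fans out to several back-end "databases" and
// assembles the rows into a single virtual result set.
public class VirtualSchemaGateway {

    public interface Backend {
        List<Map<String, String>> query(String sql);
    }

    private final List<Backend> backends = new ArrayList<>();

    public void register(Backend b) {
        backends.add(b);
    }

    // Distribute the query and assemble results into one view.
    public List<Map<String, String>> query(String sql) {
        List<Map<String, String>> view = new ArrayList<>();
        for (Backend b : backends) {
            view.addAll(b.query(sql));
        }
        return view;
    }
}
```

From the caller's perspective there is one schema and one API; which rows came from DB2 and which from Oracle is the gateway's problem.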

Database gateways translate the SQL calls into a standard format known as the Format and Protocol (FAP), the common connection between the client and the server. FAP is also the common link between very different databases and platforms. The gateway can translate the API call directly into FAP, moving the request to the target database, and translating the request so that the target database and platform can react.

Clearly, the use of legacy systems will continue, and as we move quickly to SOA, so will the enablement of these systems. The trick is to identify the services that are important to the enterprise now and create a plan for exposing those services, including which enabling technology and standards to apply. Once you've done that, prepare yourself for some complexity. None of these systems was designed to expose services, and it's not going to be an easy job no matter how good the translation and wrapping technology is. It could, in most cases, mean that you need to change and test code that is more than 20 years old. Oh well, that's why service archaeologists will be making the big bucks.

More Stories By David Linthicum

David Linthicum is the Chief Cloud Strategy Officer at Deloitte Consulting, and was just named the #1 cloud influencer via a recent major report by Apollo Research. He is a cloud computing thought leader, executive, consultant, author, and speaker. He has been a CTO five times for both public and private companies, and a CEO two times in the last 25 years.

Few individuals are true giants of cloud computing, but David's achievements, reputation, and stellar leadership have earned him a lofty position within the industry. It's not just that he is a top thought leader in the cloud computing universe; he is often the visionary that the wider media invites to offer readers, listeners, and viewers a peek inside the technology that is reshaping businesses every day.

With more than 13 books on computing, more than 5,000 published articles, more than 500 conference presentations and numerous appearances on radio and TV programs, he has spent the last 20 years leading, showing, and teaching businesses how to use resources more productively and innovate constantly. He has expanded the vision of both startups and established corporations as to what is possible and achievable.

David is a Gigaom research analyst and writes prolifically for InfoWorld as a cloud computing blogger. He is also a contributor to "IEEE Cloud Computing" and TechTarget's SearchCloud and SearchAWS, and is quoted in major business publications including Forbes, Business Week, The Wall Street Journal, and the LA Times. David has appeared on NPR several times as a computing industry commentator, and does a weekly podcast on cloud computing.
