In this paper we evaluate the use of case-based classification to resolve a number of questions related to information
sharing in the context of an Integrated Web services Brokering System (IWB). We are developing the IWB to
independently decompose and analyze ad hoc Web services interface descriptions in order to identify Web services of
interest. Our approach is to have the IWB cache information about each service in order to support an autonomous
mediation process. In this mediation process, the IWB independently matches the user's data request to the correct
method within the appropriate Web service, translates the user's request to the correct syntax and structure of the Web
service request, dynamically invokes the method on the service, and translates the Web service response. We use case-based
classification as a means of automating the IWB's analysis of relevant services and operations. Case-based
classification retrieves and reuses decisions based on training data. We use sample Web Service Description Language
(WSDL) files and schema from actual Web services as training data in our approach and do not require the service to
pre-deploy an OWL-S ontology. We present our evaluation of this approach, including performance results, in the context of
meteorological and oceanographic (MetOc) Web services as they relate to the IWB.
Web Services are becoming the standard technology used to share data for many Navy and other DoD operations. Since Web Services technologies provide for discoverable, self-describing services that conform to common standards, this paradigm holds the promise of an automated capability to obtain and integrate data. However, automated integration of applications to access and retrieve data from heterogeneous sources in a distributed system such as the Internet poses many difficulties. Assimilation of data from Web-based sources means that differences in schema and terminology
prevent simple querying and retrieval of data. Thus, machine understanding of the Web Services interface is necessary
for automated selection and invocation of the correct service. Service availability is also an issue that needs to be
resolved. There have been many advances in ontologies to help resolve these difficulties and support the goal of sharing
knowledge across various domains of interest.
In this paper we examine the use of case-based classification as an alternative/supplement to using ontologies for
resolving several questions related to knowledge sharing. While ontologies encompass a formal definition of a domain of
interest, case-based reasoning is a problem solving methodology that retrieves and reuses decisions from stored cases to
solve new problems, and case-based classification involves applying this methodology to classification tasks. Our
approach generalizes well on sparse data, which characterizes our Web Services application. We present our study as it
relates to our work on development of the Advanced MetOc Broker, whose objective is the automated application
integration of meteorological and oceanographic (MetOc) Web Services.
KEYWORDS: Web services, Data communications, Data fusion, Internet, Meteorology, Data integration, Associative arrays, Analytical research, Computer security, Human-machine interfaces
Web Services are being adopted as the enabling technology to provide net-centric capabilities for many Department of Defense operations. The Navy Enterprise Portal, for example, is Web Services-based, and the Department of the Navy is promulgating guidance for developing Web Services. Web Services, however, only constitute a baseline specification that provides the foundation on which users, under current approaches, write specialized applications in order to retrieve data over the Internet. Application development may increase dramatically as the number of different available Web Services increases. Reasons for specialized application development include XML schema versioning differences, adoption/use of diverse business rules, security access issues, and time/parameter naming constraints, among others.
We are currently developing for the US Navy a system which will improve delivery of timely and relevant meteorological and oceanographic (MetOc) data to the warfighter. Our objective is to develop an Advanced MetOc Broker (AMB) that leverages Web Services technology to identify, retrieve and integrate relevant MetOc data in an automated manner. The AMB will utilize a Mediator, which will be developed by applying ontological research and schema matching techniques to MetOc forms of data. The AMB, using the Mediator, will support a new, advanced approach to the use of Web Services; namely, the automated identification, retrieval and integration of MetOc data. Systems based on this approach will then not require extensive end-user application development for each Web Service from which data can be retrieved. Users anywhere on the globe will be able to receive timely environmental data that fits their particular needs.
Future military operations and the intelligence community will rely increasingly on Internet-based solutions for the automated delivery of MetOc data and products to the warfighter. These needs are being addressed by Tactical Environmental Data Services (TEDServices). TEDServices is being engineered by the Naval Research Laboratory, the Naval Oceanographic Office and the Naval Undersea Warfare Center, with sponsorship from Space and Naval Warfare Systems Command (SPAWAR) PMW-150. TEDServices was successfully demonstrated during April 2004, in FBE-Kilo, and is in transition to the US Navy this fiscal year. This paper describes how TEDServices has been engineered to provide solutions to issues routinely confronted by warfighters. These solutions include, but are not limited to, better bandwidth usage, automated data ordering, simplified data management, automated data transformations, forward-deployed data caching, simplified integration with legacy tactical decision aids, and support for joint interoperability.
The Naval Research Laboratory's Digital Mapping, Charting and Geodesy Analysis Program is investigating the application of wavelet technology to terrain approximation in 3D mapping. The wavelet transform allows us to obtain the frequency content of gridded elevation data while retaining the spatial context. We use a 2D discrete wavelet transform (DWT) to reduce Digital Terrain Elevation Data (DTED) to low and high frequency components. The low frequency components represent widespread fluctuations in terrain and over large areas give a very close approximation to the original data set. Each application of a wavelet transform yields a 75% reduction in the amount of data that must be displayed. A level 2, 2D DWT allows us to represent large amounts of terrain data with only 6.25% of the original data. A reverse transform on the reduced data set makes possible the restoration of any level up to the original data with only minor loss, making the approach suitable for multi-resolution systems. It is also well suited to time-critical applications. Processing 1,073,179 DTED elevations down to 67,304 takes approximately one-half second. Optimized triangulated irregular network algorithms are reported to require over 45 seconds for a similarly sized data set. We describe the application of wavelet technology to Internet-based 3D mapping. In addition to custom 2D maps that may consist of vector, raster and gridded data, users may generate 3D maps by area-of-interest.
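The data-reduction arithmetic above (75% per transform level, 6.25% after two levels) can be sketched with a block-averaging approximation of the Haar low-frequency (LL) subband. This is a simplified stand-in for illustration, not the paper's DWT implementation; each level keeps one quarter of the samples.

```python
import numpy as np

def haar_dwt2_ll(a):
    """One level of a (block-averaging) 2D Haar transform, returning
    only the low-frequency LL approximation subband: each 2x2 block
    of elevations is replaced by its mean, a 75% data reduction."""
    rows = (a[0::2, :] + a[1::2, :]) / 2.0   # pairwise average of rows
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0  # then of columns

terrain = np.arange(16.0).reshape(4, 4)  # toy 4x4 elevation grid
level1 = haar_dwt2_ll(terrain)           # 2x2 -> 25% of the samples
level2 = haar_dwt2_ll(level1)            # 1x1 -> 6.25% of the samples
```

A full DWT would also retain the three high-frequency subbands per level so the original grid can be reconstructed; the LL subband alone is what a multi-resolution display would draw at coarse zoom.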
The Naval Research Laboratory's Digital Mapping, Charting, and Geodesy Analysis Program is investigating the extension of the National Imagery and Mapping Agency's Vector Product Format (VPF) to handle a wide range of non-manifold 3D objects for modeling and simulation. The extended VPF, referred to as VPF+, makes use of a non-manifold data structure for modeling 3D synthetic environments. The data structure uses a boundary representation method.
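A minimal sketch of the non-manifold boundary-representation idea underlying VPF+: unlike a manifold structure, which permits at most two faces per edge, an edge record may reference any number of incident faces. The class names below are illustrative assumptions, not the VPF+ schema.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    x: float
    y: float
    z: float

@dataclass
class Edge:
    start: "Node"
    end: "Node"
    # Non-manifold: any number of incident faces may share this edge.
    faces: list = field(default_factory=list)

@dataclass
class Face:
    edges: list

# Three faces (e.g. two walls and a fence) sharing one vertical edge --
# a configuration a two-faces-per-edge manifold structure cannot represent.
n1, n2 = Node(0, 0, 0), Node(0, 0, 1)
shared = Edge(n1, n2)
for _ in range(3):
    shared.faces.append(Face([shared]))
```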
Three-dimensional terrain representation plays an important role in a number of terrain database applications. Hierarchical triangulated irregular networks (TINs) provide a variable-resolution terrain representation based on a nested triangulation of the terrain. This paper compares and analyzes existing hierarchical triangulation techniques. The comparative analysis takes into account how aesthetically appealing and accurate the resulting terrain representation is. Parameters such as adjacency, slivers, and streaks are used to measure how aesthetically appealing the terrain representation is. Slivers occur when the triangulation produces thin, slivery triangles. Streaks appear when too many triangles meet at a given vertex. Simple mathematical expressions are derived for these parameters, providing a fairer and more easily duplicated comparison. In addition to meeting the adjacency requirement, an aesthetically pleasing hierarchical TIN generation algorithm is expected to reduce both slivers and streaks while maintaining accuracy. A comparative analysis of a number of existing approaches shows that a variant of a method originally proposed by Scarlatos exhibits better overall performance.
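One way to make the sliver parameter concrete is a minimum-interior-angle test: thin, slivery triangles have a very small minimum angle. The specific metric and the 10-degree threshold below are our assumptions for illustration, not necessarily the expressions derived in the paper.

```python
import math

def min_angle_deg(p1, p2, p3):
    """Smallest interior angle of triangle (p1, p2, p3), in degrees."""
    def angle_at(a, b, c):
        # Angle at vertex a, via the law of cosines.
        ab, ac, bc = math.dist(a, b), math.dist(a, c), math.dist(b, c)
        return math.degrees(math.acos((ab**2 + ac**2 - bc**2) / (2 * ab * ac)))
    return min(angle_at(p1, p2, p3), angle_at(p2, p1, p3), angle_at(p3, p1, p2))

def is_sliver(p1, p2, p3, threshold_deg=10.0):
    """Flag a triangle as a sliver when its minimum interior angle
    falls below a chosen threshold (the threshold is an assumption)."""
    return min_angle_deg(p1, p2, p3) < threshold_deg

print(is_sliver((0, 0), (10, 0), (5, 0.2)))   # long, thin triangle -> True
print(is_sliver((0, 0), (1, 0), (0.5, 0.9)))  # well-shaped triangle -> False
```

A streak metric could be defined analogously, e.g. as the count of triangles incident on a vertex exceeding some bound.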