Wednesday, February 20, 2013


Developers commonly use JDeveloper to create a deployment project that uploads files to MDS.
To remove files from MDS, however, you need the WLST tool. In fact, you can use WLST to import, export, delete, and purge MDS documents, and to remove MDS directories.

The following are the commonly used WLST “commands” to manipulate the MDS:

·        importMetadata – import directories and documents

·        exportMetadata – export MDS trees and documents

·        deleteMetadata – delete MDS documents (not directories)

·        purgeMetadata – purge old, unused versions of MDS documents

·        sca_removeSharedData – remove directories and documents

These commands appear to be straightforward, but you can easily be tripped up if you don't pay attention to some details.

Not All wlst.cmd Files Are Created Equal

Depending on how and where WLST is started, you may have different commands available to you. Which WLST commands are available depends on which .jar files are loaded into the classpath when the WLST tool starts.
With my locally installed SOA Suite, I found 6 different copies of wlst.cmd:

1.      C:\Oracle\Middleware\wlserver_10.3\common\bin

2.      C:\Oracle\jdev-mw\wlserver_10.3\common\bin

3.      C:\Oracle\jdev-mw\oracle_common\common\bin

4.      C:\Oracle\Middleware\Oracle_OSB1\common\bin

5.      C:\Oracle\Middleware\oracle_common\common\bin

6.      C:\Oracle\Middleware\Oracle_SOA1\common\bin

Not all wlst.cmd files in this list support the MDS commands. I know #2 doesn't work.
I was able to import and export MDS data using wlst.cmd under #3 (C:\Oracle\jdev-mw\oracle_common\common\bin), but I had to use #6 (C:\Oracle\Middleware\Oracle_SOA1\common\bin) for the sca_removeSharedData command to work.

Examples Using WLST to Work with MDS

cd c:\Oracle\jdev-mw\oracle_common\common\bin
connect('weblogic', 'welcome1', 't3://localhost:7001')

To export data
exportMetadata(application='soa-infra', server='soa_server1', toLocation='c:/junk/mdsout', docs='/apps/test/**')

Assuming I have some files in MDS under the /apps/test tree, this command creates the c:\junk\mdsout\apps\test directory, which will contain all non-empty directories and files under the /apps/test tree in MDS.

1.      If your server is remote (not localhost), you may need to add remote='true': exportMetadata(application='soa-infra', server='soa_server1', toLocation='c:/junk/mdsout', docs='/apps/test/**', remote='true')
2.      Depending on which version of wlst.cmd you use, you may need to export to a .jar file instead of a directory: exportMetadata(application='soa-infra', server='soa_server1', toLocation='c:/junk/mdsout/test.jar', docs='/apps/test/**')
3.      This command only dumps out the non-empty tree structures in MDS.
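
The export steps above can also be scripted and run non-interactively instead of typing them at the WLST prompt. The sketch below is a minimal WLST script for my local environment (the file name, credentials, and ports are my examples; adjust them for your setup). It only runs inside WLST, e.g. wlst.cmd exportTest.py:

```python
# exportTest.py - run with: wlst.cmd exportTest.py
# Connects to the local admin server and exports the /apps/test MDS tree
# to a directory on disk.
connect('weblogic', 'welcome1', 't3://localhost:7001')
exportMetadata(application='soa-infra', server='soa_server1',
               toLocation='c:/junk/mdsout', docs='/apps/test/**')
disconnect()
exit()
```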

To import data
importMetadata(application='soa-infra',server='soa_server1',fromLocation='C:/junk/mdsout', docs='/apps/test/**')

This will import the data files under c:\junk\mdsout\apps\test into the MDS tree /apps/test.
This command will not import empty directories. As with the export command, you may need to import a .jar file instead of a directory.

To Delete Data
deleteMetadata(application='soa-infra', server='soa_server1', docs='/apps/test/**')

This only deletes the files under the MDS /apps/test tree; it does not delete the directories.

To Remove the Directories
sca_removeSharedData('http://localhost:8001', 'test') – removes the MDS tree /apps/test
sca_removeSharedData('http://localhost:8001', 'test/dvm') – removes the MDS tree /apps/test/dvm

Be careful not to add a leading or trailing / to 'test' or 'test/dvm'; otherwise WLST complains that it cannot find any documents to remove.
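
Putting the delete and remove steps together, a full cleanup of the /apps/test tree looks roughly like the sketch below (again, credentials and ports are from my local environment, and the script must be started from a wlst.cmd that supports sca_* commands, i.e. #6 above):

```python
# cleanupTest.py - run with the wlst.cmd under
# C:\Oracle\Middleware\Oracle_SOA1\common\bin so sca_removeSharedData exists.
connect('weblogic', 'welcome1', 't3://localhost:7001')
# 1. Delete the documents (files) under /apps/test.
deleteMetadata(application='soa-infra', server='soa_server1', docs='/apps/test/**')
# 2. Remove the now-empty directory tree; note: no leading/trailing slash.
sca_removeSharedData('http://localhost:8001', 'test')
disconnect()
exit()
```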



Use JCA file adapter to Parse CSV file with master and detailed records

The JCA file adapter can parse a single-record-type CSV file easily. It also has limited support for parsing mixed-record-type CSV files; however, it relies on each record type starting with a fixed value.
In my case, I have a CSV file with master-detail records that looks like this:
Master c1, master c2, master c3
a, b, c
Det c1, det 2, det 3, det 4, det 5
1, 2, 3, 4, 5
6, 7, 8, 9, 10

Since my detail records do not start with a fixed value (detail row 1 starts with 1, row 2 starts with 6), I cannot use the JCA wizard to parse this file directly. Here is how I managed to do it: my solution is to create two file adapters and parse the same data file twice, first for the master record, then for the detail records.

For master record:

1.      Create a copy of the sample data file, and remove the detail records
2.      Remove the spaces in the master header row
3.      Generate the XSD with the JCA adapter native file wizard; select uniform file, and use the 1st line as the header
4.      After the XSD (let's call it header.xsd) is generated, make these changes in the XSD:

·        nxsd:headerLinesTerminatedBy="${eol}" – make sure it's set like this
·        nxsd:headerLines="1" – this name is a misnomer; it simply means how many lines to skip
·        nxsd:hasHeader="false" – sounds contradictory to the line above, but the actual data file has spaces in the header titles; this makes the parser skip the header line
·        nxsd:dataLines="1" – so it only reads 1 line of data
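
For reference, these attributes sit on the schema's root element. A trimmed sketch of what the top of header.xsd might look like after the edits (the target namespace is illustrative; keep whatever the wizard generated, and change only the nxsd attributes above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            targetNamespace="http://example.com/master"
            nxsd:version="NXSD"
            nxsd:stream="chars"
            nxsd:encoding="US-ASCII"
            nxsd:hasHeader="false"
            nxsd:headerLines="1"
            nxsd:headerLinesTerminatedBy="${eol}"
            nxsd:dataLines="1">
  <!-- wizard-generated element/complexType definitions for the master record go here -->
</xsd:schema>
```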

Follow similar steps for the detail records: remove the master header and record from the sample data file, and after the XSD (call it body.xsd) is generated, make the following changes:

·        nxsd:hasHeader="false" – so it won't parse the header record, because spaces in the header cause problems
·        nxsd:headerLines="3" – this will skip the first 3 lines of the data file
·        nxsd:headerLinesTerminatedBy="${eol}" – make sure the header lines are terminated properly
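
Correspondingly, a sketch of the root element of body.xsd (same idea as header.xsd, but skipping 3 lines and with no nxsd:dataLines, so all remaining lines are read; namespaces are illustrative):

```xml
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            targetNamespace="http://example.com/detail"
            nxsd:version="NXSD"
            nxsd:stream="chars"
            nxsd:encoding="US-ASCII"
            nxsd:hasHeader="false"
            nxsd:headerLines="3"
            nxsd:headerLinesTerminatedBy="${eol}">
  <!-- wizard-generated definitions for the detail records go here -->
</xsd:schema>
```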
In the BPEL process, create two JCA file adapters.
The first adapter is called to load (parse) the header record into BPEL; choose header.xsd when creating this adapter. Remember, header.xsd skips the first line of the data file (the header) and parses only the next line; since we set dataLines="1", the rest of the detail records are skipped.
Also make sure to modify the .jca file so that the data file is not deleted after the header record is loaded:
  <property name="DeleteFile" value="false"/>

Create the 2nd file adapter and select body.xsd. Based on that XSD, the 2nd adapter will skip the first 3 lines (master record header, master data, and detail record header).

With this two-pass approach, you can load both the master and the detail records into BPEL.
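
The skip-then-read logic the two adapters perform can be illustrated outside SOA Suite. This plain-Python sketch is not part of the adapter setup; it just demonstrates the two passes over a master-detail sample like the one above (the data and function names are my own for illustration):

```python
import csv

SAMPLE = """Master c1, master c2, master c3
a, b, c
Det c1, det 2, det 3, det 4, det 5
1, 2, 3, 4, 5
6, 7, 8, 9, 10
"""

def read_master(text):
    # Pass 1: skip 1 line (headerLines="1"), then read exactly 1 data
    # line (dataLines="1"); everything after it is ignored.
    lines = text.splitlines()
    return next(csv.reader([lines[1]], skipinitialspace=True))

def read_details(text):
    # Pass 2: skip the first 3 lines (headerLines="3") - master header,
    # master data, detail header - then read every remaining line.
    lines = text.splitlines()[3:]
    return [row for row in csv.reader(lines, skipinitialspace=True)]

print(read_master(SAMPLE))   # ['a', 'b', 'c']
print(read_details(SAMPLE))  # [['1', '2', '3', '4', '5'], ['6', '7', '8', '9', '10']]
```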