<file:write doc:name="Write" config-ref="File_Config" path="my_transform">
<file:content ><![CDATA[#[output application/csv --- payload]]]></file:content>
</file:write>
DataWeave Output Formats and Writer Properties
DataWeave 2.1 is compatible with Mule 4.1. Standard Support for Mule 4.1 ended on November 2, 2020, and this version of Mule will reach its End of Life on November 2, 2022, when Extended Support ends. Deployments of new applications to CloudHub that use this version of Mule are no longer allowed. Only in-place updates to applications are permitted. MuleSoft recommends that you upgrade to the latest version of Mule 4 that is in Standard Support so that your applications run with the latest fixes and security enhancements.
DataWeave can read and write many data formats, such as JSON, XML, and CSV. DataWeave supports these formats (or mime types) as both input and output:
Mime Type | Supported Formats |
---|---|
application/csv | CSV |
application/dw | DataWeave (for testing a DataWeave expression) |
application/flatfile | Flat File, Cobol Copybook, and Fixed Width |
application/java | Java |
application/json | JSON |
application/octet-stream | For binary. |
application/xlsx | Excel |
application/xml | XML |
application/x-www-form-urlencoded | URL Encoding |
application/yaml | For Yaml Data Format. |
multipart/form-data | Multipart (Form-Data) |
text/plain | For plain text. |
Setting Mime Types
You can specify the mime type for the input and output data that flows through a Mule app.
For DataWeave transformations, you can specify the mime type for the output data. For example, you might set the output header directive of an expression in the Transform component or a Write operation to output application/json or output application/csv. This example sets the mime type through a File Write operation to ensure that a format-specific writer, the CSV writer, outputs the payload in CSV format:
<file:write doc:name="Write" config-ref="File_Config" path="my_transform">
  <file:content><![CDATA[#[output application/csv --- payload]]]></file:content>
</file:write>
For input data, format-specific readers for Mule sources (such as the On New File listener), Mule operations (such as Read and HTTP Request operations), and DataWeave expressions attempt to infer the mime type from metadata that is associated with input payloads, attributes, and variables in the Mule event. When the mime type cannot be inferred from the metadata (and when that metadata is not static), Mule sources and operations allow you to specify the mime type for the reader. For example, you might set the mime type for the On New File listener to outputMimeType='application/csv'
for CSV file input. This setting provides information about the file format to the CSV reader.
<file:listener doc:name="On New File"
config-ref="File_Config"
outputMimeType='application/csv'>
</file:listener>
Note that reader settings are not used to perform a transformation from one format to another. They simply help the reader interpret the format of the input.
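For example, if the listener above reads CSV input, converting that payload to JSON still requires a transformation, such as this minimal Transform component script:
%dw 2.0
output application/json
---
payload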
You can also set special reader and writer properties for use by the format-specific reader or writer of a source, operation, or component. See Using Reader and Writer Properties.
Using Reader and Writer Properties
In some cases, it is necessary to modify or specify aspects of the format through format-specific properties. For example, you can specify CSV input and output properties, such as the separator
(or delimiter) to use in the CSV file. For Cobol copybook, you need to specify the path to a schema file using the schemaPath
property.
You can append reader properties to the mime type (outputMimeType) attribute for certain components in your Mule app. Listeners and Read operations accept these settings. For example, this On New File listener identifies the comma (,) as the separator for a CSV input file:
<file:listener doc:name="On New File" config-ref="File_Config" outputMimeType='application/csv; separator=","'>
<scheduling-strategy >
<fixed-frequency frequency="45" timeUnit="SECONDS"/>
</scheduling-strategy>
<file:matcher filenamePattern="comma_separated.csv" />
</file:listener>
Note that the outputMimeType setting above helps the CSV reader interpret the format and delimiter of the input comma_separated.csv file; it does not affect the writer.
To specify the output format, you can provide the mime type and any writer properties for the writer, such as the CSV or JSON writer used by a File Write operation. For example, you might need to write a pipe (|) delimiter in your CSV output payload instead of some other delimiter used in the input. To do this, you append the property and its value to the output directive of a DataWeave expression. For example, this Write operation specifies the pipe as the separator:
<file:write doc:name="Write" config-ref="File_Config" path="my_transform">
<file:content ><![CDATA[#[output application/csv separator="|" --- payload]]]></file:content>
</file:write>
The sections below list the format-specific reader and writer properties available for each supported format.
Cobol Copybook
Mime Type: application/flatfile
A Cobol copybook is a type of flat file that describes the layout of records and fields in a Cobol data file.
The Transform component provides settings for handling the Cobol copybook format. For example, you can import a Cobol definition into the Transform component and use it for your Copybook transformations.
Importing a Copybook Definition
When you import a copybook definition, the Transform component converts the definition to a flat file schema that you can reference with the schemaPath property.
To import a copybook definition:
- Right-click the input payload in the Transform component in Studio, and select Set Metadata to open the Set Metadata Type dialog.
  Note that you need to create a metadata type before you can import a copybook definition.
- Provide a name for your copybook metadata, such as copybook.
- Select the Copybook type from the Type drop-down menu.
- Import your copybook definition file.
- Click Select.
Figure 1. Importing a Copybook Definition File
For example, assume that you have a copybook definition file (mailing-record.cpy
) that looks like this:
01 MAILING-RECORD.
   05 COMPANY-NAME        PIC X(30).
   05 CONTACTS.
      10 PRESIDENT.
         15 LAST-NAME     PIC X(15).
         15 FIRST-NAME    PIC X(8).
      10 VP-MARKETING.
         15 LAST-NAME     PIC X(15).
         15 FIRST-NAME    PIC X(8).
      10 ALTERNATE-CONTACT.
         15 TITLE         PIC X(10).
         15 LAST-NAME     PIC X(15).
         15 FIRST-NAME    PIC X(8).
   05 ADDRESS             PIC X(15).
   05 CITY                PIC X(15).
   05 STATE               PIC XX.
   05 ZIP                 PIC 9(5).
Note: Copybook definitions must always begin with a 01 entry. A separate record type is generated for each 01 definition in your copybook (there must be at least one 01 definition for the copybook to be usable, so add one using an arbitrary name at the start of the copybook if none is present). If there are multiple 01 definitions in the copybook file, you can select which definition to use in the transform from the drop-down list.
Note: COBOL format requires definitions to only use columns 7-72 of each line. Data in columns 1-5 and past column 72 is ignored by the import process. Column 6 is a line continuation marker.
When you import the schema, the Transform component converts the copybook file to a flat file schema that it stores in the src/main/resources/schema
folder of your Mule project. In flat file format, the copybook definition above looks like this:
form: COPYBOOK
id: 'MAILING-RECORD'
values:
- { name: 'COMPANY-NAME', type: String, length: 30 }
- name: 'CONTACTS'
  values:
  - name: 'PRESIDENT'
    values:
    - { name: 'LAST-NAME', type: String, length: 15 }
    - { name: 'FIRST-NAME', type: String, length: 8 }
  - name: 'VP-MARKETING'
    values:
    - { name: 'LAST-NAME', type: String, length: 15 }
    - { name: 'FIRST-NAME', type: String, length: 8 }
  - name: 'ALTERNATE-CONTACT'
    values:
    - { name: 'TITLE', type: String, length: 10 }
    - { name: 'LAST-NAME', type: String, length: 15 }
    - { name: 'FIRST-NAME', type: String, length: 8 }
- { name: 'ADDRESS', type: String, length: 15 }
- { name: 'CITY', type: String, length: 15 }
- { name: 'STATE', type: String, length: 2 }
- { name: 'ZIP', type: Integer, length: 5, format: { justify: ZEROES, sign: UNSIGNED } }
After importing the copybook, you can use the schemaPath property to reference the generated flat file schema through the output directive. For example: output application/flatfile schemaPath="src/main/resources/schemas/mailing-record.ffd"
Supported Copybook Features
Not all copybook features are supported by the Cobol Copybook format in DataWeave. In general, the format supports most common usages and simple patterns, including:
- USAGE of DISPLAY, BINARY (COMP), and PACKED-DECIMAL (COMP-3)
- PICTURE clauses for numeric values consisting only of:
  - '9' - One or more numeric character positions
  - 'S' - One optional sign character position, leading or trailing
  - 'V' - One optional decimal point
  - 'P' - One or more decimal scaling positions
- PICTURE clauses for alphanumeric values consisting only of 'X' character positions
- Repetition counts for '9', 'P', and 'X' characters in PICTURE clauses (as in 9(5) for a 5-digit numeric value)
- OCCURS DEPENDING ON with controlVal property in schema. Note that if the control value is nested inside a containing structure, you need to manually modify the generated schema to specify the full path for the value in the form "container.value".
Unsupported Copybook Features
Unsupported copybook features include the following:
- Alphanumeric-edited PICTURE clauses
- Numeric-edited PICTURE clauses with insertion or replacement
- Special level-numbers:
  - Level 66 - Alternate name for a field or group
  - Level 77 - Independent data item
  - Level 88 - Condition names (equivalent to an enumeration of values)
- USAGE of COMP-1, COMP-2, or COMP-5
- REDEFINES clause (used to provide different views of the same portion of record data)
- VALUE clause (used to define a value of a data item or conditional name from a literal or another data item)
- SYNC clause (used to align values within a record)
Common Copybook Import Issues
The most common issue with copybook imports is a failure to follow the Cobol standard for input line regions. The copybook import parsing ignores the contents of columns 1-6 of each line, and ignores all lines with an '*' (asterisk) in column 7. It also ignores everything beyond column 72 in each line. This means that all your actual data definitions need to be within columns 8 through 72 of input lines.
Tabs in the input are not expanded because there is no defined standard for tab positions. Each tab character is treated as a single space character when counting copybook input columns.
Indentation is ignored when processing the copybook, with only level-numbers treated as significant. This is not normally a problem, but it means that copybooks might be accepted for import even though they are not accepted by Cobol compilers.
Both warnings and errors might be reported as a result of a copybook import. Warnings generally report unsupported or unrecognized features, which might or might not be significant. Errors are notifications of a problem that means the generated schema (if any) will not be a completely accurate representation of the copybook. Review any warnings or errors and decide on the appropriate handling, which might be simply accepting the schema as generated, modifying the input copybook, or modifying the generated schema.
Reader Properties (for Cobol Copybook)
When defining an input of type Copybook, there are a few optional parameters you can add in the XML definition of your Mule project to customize how the data is parsed.
Parameter | Type | Default | Description |
---|---|---|---|
schemaPath | string | | Location in your local disk of the schema file used to parse your input |
structureIdent | string | | In case the schema file defines multiple segments, this field selects which to use |
missingValues | string | nulls | How missing values are represented in the input data |
recordParsing | string | strict | Expected separation between lines/records (for example, strict or lenient) |
enforceRequires | boolean | false | Whether an error should be thrown when a required value is missing (past the end of the record) |
zonedStrict | boolean | false | Whether "strict" zoned decimal format should be used with non-EBCDIC encodings, as opposed to the same display characters as for EBCDIC |
truncateDependingOn | boolean | false | Whether DEPENDING ON COBOL copybook values should be truncated to the length actually used, rather than always taking the maximum space |
Note that schemas with type Binary or Packed don't allow for the detection of line breaks, so setting recordParsing to lenient only allows long records to be handled, not short ones. These schemas only work with certain single-byte character encodings (so not with UTF-8 or any multibyte format).
Writer Properties (for Cobol Copybook)
When defining an output of type Copybook, there are a few optional parameters you can add to the DataWeave output directive to customize how the data is written:
Parameter | Type | Default | Description |
---|---|---|---|
schemaPath | string | | Path where the schema file to be used is located |
structureIdent | string | | In case the schema file defines multiple formats, indicates which of them to use |
encoding | string | UTF-8 | Output character encoding |
missingValues | string | nulls | How to represent optional values missing from the supplied map |
recordTerminator | string | Standard Java line termination for the system | Termination for every line/record. In Mule runtime versions 4.0.4 and older, this is only used as a separator when there are multiple records |
trimValues | boolean | | Trim string values longer than the field length by truncating trailing characters |
enforceRequires | boolean | false | Whether an error should be thrown when a required value is missing (past the end of the record) |
zonedStrict | boolean | false | Whether "strict" zoned decimal format should be used with non-EBCDIC encodings, as opposed to the same display characters as for EBCDIC |
truncateDependingOn | boolean | false | Whether DEPENDING ON COBOL copybook values should be truncated to the length actually used, rather than always taking the maximum space |
For example:
output application/flatfile schemaPath="src/main/resources/schemas/QBReqRsp.esl", structureIdent="QBResponse"
CSV
Mime Type: application/csv
CSV content is modeled in DataWeave as a list of objects, where every record is an object and every field in it is a property. For example:
%dw 2.0
output application/csv
---
[
{
"Name":"Mariano",
"Last Name":"De achaval"
},
{
"Name":"Leandro",
"Last Name":"Shokida"
}
]
Name,Last Name
Mariano,De achaval
Leandro,Shokida
Reader Properties (for CSV)
In CSV, you can assign any special character as the indicator for separating fields, toggling quotes, or escaping quotes. Make sure you know what special characters are being used in your input so that DataWeave can interpret it correctly.
When defining an input of type CSV, there are a few optional parameters you can add in the XML definition of your Mule project to customize how the data is parsed.
Parameter | Type | Default | Description |
---|---|---|---|
separator | char | , | Character that separates one field from another field |
quote | char | " | Character that delimits the field values |
escape | char | \ | Character used to escape occurrences of the separator or quote character within field values |
bodyStartLineNumber | number | | The line number where the body starts |
ignoreEmptyLine | boolean | | Defines if empty lines are ignored |
header | boolean | | Indicates if the first line of the input contains field names |
headerLineNumber | number | | The line number where the header is located |
streaming | boolean | | Used for streaming input CSV. (Use only if entries are accessed sequentially.) |
- When header=true, you can access the fields within the input anywhere by name, for example: payload.userName.
- When header=false, you must access the fields by index, referencing first the entry and then the field, for example: payload[107][2].
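For example, a script along the following lines (a sketch that reuses the Name field from the sample data above) reads CSV that includes a header row and selects a field by name:
%dw 2.0
input payload application/csv header=true
output application/json
---
payload map (record) -> { name: record.Name }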
Writer Properties (for CSV)
When defining an output of type CSV, there are a few optional parameters you can add to the output directive to customize how the data is written:
Parameter | Type | Default | Description |
---|---|---|---|
separator | char | , | Character that separates one field from another field |
encoding | string | | The character set to be used for the output |
quote | char | " | Character that delimits the field values |
escape | char | \ | Character used to escape occurrences of the separator or quote character within field values |
lineSeparator | string | system line ending default | Line separator to be used, for example "\r\n" |
header | boolean | true | Indicates if the first line of the output shall contain field names |
quoteHeader | boolean | false | Indicates if header values should be quoted |
quoteValues | boolean | false | Indicates if every value should be quoted whether or not it contains special characters |
All of these parameters are optional. A CSV output directive might, for example, look like this:
output application/csv separator=";", header=false, quoteValues=true
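For example, combining writer properties with the sample data from the beginning of this section produces pipe-separated output without a header row:
%dw 2.0
output application/csv separator="|", header=false
---
[
  { "Name": "Mariano", "Last Name": "De achaval" },
  { "Name": "Leandro", "Last Name": "Shokida" }
]
which writes:
Mariano|De achaval
Leandro|Shokida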
DataWeave
Mime Type: application/dw
The DataWeave format is the canonical format for all transformations. Using it can be helpful for understanding how input data is interpreted before it is transformed to a new format.
This example shows how XML input is expressed in the DataWeave format.
<employees>
<employee>
<firstname>Mariano</firstname>
<lastname>DeAchaval</lastname>
</employee>
<employee>
<firstname>Leandro</firstname>
<lastname>Shokida</lastname>
</employee>
</employees>
{
employees: {
employee: {
firstname: "Mariano",
lastname: "DeAchaval"
},
employee: {
firstname: "Leandro",
lastname: "Shokida"
}
}
} as Object {encoding: "UTF-8", mimeType: "text/xml"}
Excel
Mime Type: application/xlsx
Only .xlsx
files are supported (Excel 2007). .xls
files are not supported by Mule runtime.
An Excel workbook is a sequence of sheets. In DataWeave, this is mapped to an object where each sheet is a key. Only one table is allowed per Excel sheet. A table is expressed as an array of rows. A row is an object where its keys are the columns and the values the cell content.
output application/xlsx header=true
---
{
Sheet1: [
{
Id: 123,
Name: "George"
},
{
Id: 456,
Name: "Lucas"
}
]
}
Reader Properties (for Excel)
When defining an input of type Excel, there are a few optional parameters you can add in the XML definition of your Mule project to customize how the data is parsed.
Parameter | Type | Default | Description |
---|---|---|---|
header | boolean | true | Defines if the Excel tables contain headers. When set to false, column names (A, B, C, …) are used |
ignoreEmptyLine | boolean | true | Defines if empty lines are ignored |
tableOffset | string | A1 | The position of the first cell of the tables |
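By analogy with the CSV listener example earlier on this page, reader properties such as header can also be passed as mime type parameters on the operation that reads the file. This snippet is a sketch; the config name and path are placeholder assumptions:
<file:read doc:name="Read" config-ref="File_Config"
           path="accounts.xlsx"
           outputMimeType="application/xlsx; header=true" />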
Writer Properties (for Excel)
When defining an output of type Excel, there are a few optional parameters you can add to the output directive to customize how the data is written:
Parameter | Type | Default | Description |
---|---|---|---|
header | boolean | true | Defines if the Excel tables contain headers. When there are no headers, column names (A, B, C, …) are used |
ignoreEmptyLine | boolean | true | Defines if empty lines are ignored |
tableOffset | string | A1 | The position of the first cell of the tables |
All of these parameters are optional. A DataWeave output directive for Excel might look like this:
output application/xlsx header=true
Fixed Width
Mime Type: application/flatfile
Fixed width types are technically considered a type of Flat File format, but when selecting this option the Transform component offers you settings that are better tailored to the needs of this format.
Reader Properties (for Fixed Width)
When defining an input of type Fixed Width, there are a few optional parameters you can add in the XML definition of your Mule project to customize how the data is parsed.
Parameter | Type | Default | Description |
---|---|---|---|
schemaPath | string | | Location in your local disk of the schema file used to parse your input |
missingValues | string | spaces | How missing values are represented in the input data |
recordParsing | string | strict | Expected separation between lines/records (for example, strict or lenient) |
Writer Properties (for Fixed Width)
When defining an output of type Fixed Width, there are a few optional parameters you can add to the output directive to customize how the data is written:
Parameter | Type | Default | Description |
---|---|---|---|
schemaPath | string | | Path where the schema file to be used is located |
encoding | string | UTF-8 | Output character encoding |
missingValues | string | spaces | How to represent optional values missing from the supplied map |
recordTerminator | string | Standard Java line termination for the system | Termination for every line/record. In Mule runtime versions 4.0.4 and older, this is only used as a separator when there are multiple records |
trimValues | boolean | | Trim string values longer than the field length by truncating trailing characters |
All of these parameters are optional. A DataWeave output directive for Fixed Width might look like this:
output application/flatfile schemaPath="src/main/resources/schemas/payment.ffd", encoding="UTF-8"
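If you write the schema by hand rather than generating it, the flat file schema language shown for the copybook example above can also describe fixed width records. A minimal sketch might look like the following; the form value, field names, and lengths here are illustrative assumptions, not taken from this page:
form: FIXEDWIDTH
id: 'PAYMENT'
values:
- { name: 'PAYMENT-ID', type: String, length: 10 }
- { name: 'AMOUNT', type: Integer, length: 8 }
- { name: 'CURRENCY', type: String, length: 3 }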
Flat File
Mime Type: application/flatfile
Reader Properties (for Flat File)
When defining an input of type Flat File, there are a few optional parameters you can add in the XML definition of your Mule project to customize how the data is parsed.
Parameter | Type | Default | Description |
---|---|---|---|
schemaPath | string | | Location in your local disk of the schema file used to parse your input |
structureIdent | string | | The schema file might define multiple different structures; this field selects which to use. If the schema only defines one structure, you still need to explicitly select it through this field |
missingValues | string | spaces | How missing values are represented in the input data |
recordParsing | string | strict | Expected separation between lines/records (for example, strict or lenient) |
Note that schemas with type Binary
or Packed
don’t allow for line break detection, so setting recordParsing
to lenient
only allows long records to be handled, not short ones. These schemas also currently only work with certain single-byte character encodings (so not with UTF-8 or any multibyte format).
Writer Properties (for Flat File)
When defining an output of type Flat File, there are a few optional parameters you can add to the output directive to customize how the data is written:
Parameter | Type | Default | Description |
---|---|---|---|
schemaPath | string | | Path where the schema file to be used is located |
structureIdent | string | | In case the schema file defines multiple formats, indicates which of them to use |
encoding | string | UTF-8 | Output character encoding |
missingValues | string | spaces | How to represent optional values missing from the supplied map |
recordTerminator | string | Standard Java line termination for the system | Termination for every line/record. In Mule runtime versions 4.0.4 and older, this is only used as a separator when there are multiple records |
trimValues | boolean | | Trim string values longer than the field length by truncating trailing characters |
%dw 2.0
output application/flatfile schemaPath="src/main/resources/test-data/QBReqRsp.esl", structureIdent="QBResponse"
---
payload
Multipart (Form-Data)
Format: multipart/form-data
DataWeave supports multipart subtypes, in particular form-data
. These formats allow
handling several different data parts in a single payload, regardless of the format each
part has. To distinguish the beginning and end of a part, a boundary is used and metadata for
each part can be added through headers.
Below you can see a raw multipart/form-data
payload with a 34b21
boundary consisting of 3 parts:
- a text/plain one named text
- an application/json file (a.json) named file1
- a text/html file (a.html) named file2
--34b21
Content-Disposition: form-data; name="text"
Content-Type: text/plain
Book
--34b21
Content-Disposition: form-data; name="file1"; filename="a.json"
Content-Type: application/json
{
"title": "Java 8 in Action",
"author": "Mario Fusco",
"year": 2014
}
--34b21
Content-Disposition: form-data; name="file2"; filename="a.html"
Content-Type: text/html
<!DOCTYPE html>
<title>
Available for download!
</title>
--34b21--
Within a DataWeave script, you can access and transform data from any of the parts by selecting the parts
element.
Navigation can be array based or key based when parts feature a name to reference them by.
The part’s data can be accessed through the content
keyword while headers can be accessed
through the headers
keyword.
The following script, for example, would produce Book:a.json given the previous payload:
%dw 2.0
output text/plain
---
payload.parts.text.content ++ ':' ++ payload.parts[1].headers.'Content-Disposition'.filename
You can generate multipart content with DataWeave by building an object with a list of parts, each containing its headers and content. Below is a DataWeave script that produces the raw multipart data shown earlier, assuming the HTML data is available in the payload.
%dw 2.0
output multipart/form-data
boundary='34b21'
---
{
parts : {
text : {
headers : {
"Content-Type": "text/plain"
},
content : "Book"
},
file1 : {
headers : {
"Content-Disposition" : {
"name": "file1",
"filename": "a.json"
},
"Content-Type" : "application/json"
},
content : {
title: "Java 8 in Action",
author: "Mario Fusco",
year: 2014
}
},
file2 : {
headers : {
"Content-Disposition" : {
"filename": "a.html"
},
"Content-Type" : payload.^mimeType
},
content : payload
}
}
}
Notice that the key determines the part's name if the name is not explicitly provided in the Content-Disposition header, and that DataWeave can handle content from supported formats as well as references to unsupported ones, such as HTML.
Reader Properties (for Multipart)
You can set the boundary for the reader to use when it analyzes the data.
Parameter | Type | Default | Description |
---|---|---|---|
boundary | String | | A String to delimit parts |
Note that in the DataWeave read
function, you can also pass the property as an optional parameter. The scope of the property is limited to the DataWeave script where you call the function.
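For example, assuming the raw multipart content shown earlier is available as a string in a variable (vars.rawBody here is a hypothetical name), a script could parse it like this:
%dw 2.0
output application/json
---
read(vars.rawBody, "multipart/form-data", {boundary: "34b21"}).parts.text.content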
Writer Properties (for Multipart)
The writer outputs form data using the DataWeave output directive:
output multipart/form-data
In the output directive, you can also set a property for the writer to use when it outputs the data in the specified format.
Parameter | Type | Default | Description |
---|---|---|---|
boundary | String | Randomly autogenerated | A String to delimit parts |
For example, if a boundary is 34b21
, then you can pass this:
output multipart/form-data
boundary=34b21
Note that in the DataWeave write
function, you can also pass the property as an optional parameter. The scope of the property is limited to the DataWeave script where you call the function.
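A minimal sketch of the write function with the boundary passed as a writer property:
%dw 2.0
output text/plain
---
write(
  { parts: { text: { headers: { "Content-Type": "text/plain" }, content: "Book" } } },
  "multipart/form-data",
  {boundary: "34b21"}
)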
Multipart is typically, but not exclusively, used in HTTP, where the boundary is shared through the Content-Type header.
Java
Mime Type: application/java
This table shows the mapping between Java types and DataWeave types.
Java Type | DataWeave Type |
---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Metadata property class (for Java)
Java developers use the class metadata key as a hint for what class needs to be created and sent as an input. If this is not explicitly defined, DataWeave tries to infer it from the context or assigns default values:
- java.util.HashMap for objects
- java.util.ArrayList for lists
%dw 2.0
type user = Object { class: "com.anypoint.df.pojo.User"}
output application/json
---
{
name : "Mariano",
age : 31
} as user
The code above defines the type of the required input as an instance of com.anypoint.df.pojo.User
.
Enum Custom Type (for Java)
In order to put an enum value in a java.util.Map, the DataWeave Java module defines a custom type called Enum. It allows you to specify that a given string should be handled as the name of a specified enum type. It should always be used together with the class property set to the Java class name of the enum.
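For example, a script along these lines marks a string as the name of an enum value; the com.acme.GenderEnum class here is a hypothetical placeholder:
%dw 2.0
output application/java
---
{
  gender: "MALE" as Enum {class: "com.acme.GenderEnum"}
}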
JSON
Mime Type: application/json
JSON data structures are mapped to DataWeave data structures because they share a lot of similarities.
Writer Properties (for JSON)
When defining an output of type JSON, there are a few optional parameters you can add to the output directive to customize how the data is written:
Parameter | Type | Default | Description |
---|---|---|---|
indent | boolean | true | Defines if the JSON code will be indented for better readability, or if it will be compressed into a single line |
encoding | string | UTF-8 | The character set to be used for the output |
bufferSize | number | 153600 | The size of the writer buffer |
inlineCloseOn | string | | When the writer should use an inline close tag. Possible values = empty/none |
skipNullOn | string | | Possible values = objects, arrays, or everywhere (see Skip Null On below) |
duplicateKeyAsArray | boolean | false | JSON does not allow duplicate keys under the same parent, which usually raises an exception. If set to true, the output contains a single key that points to an array containing all the values assigned to it |
For example:
output application/json indent=false, skipNullOn="everywhere"
Skip Null On (for JSON)
You can specify whether this generates an outbound message that contains fields with "null" values, or if these fields are ignored entirely. This can be set through an attribute in the output directive named skipNullOn, which can be set to three different values: objects, arrays, or everywhere.
When set to:
- objects: A key:value pair with a null value is ignored.
- arrays: Null values in arrays are skipped.
- everywhere: Apply this rule to both objects and arrays.
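For example, with skipNullOn="everywhere", both the null field and the null array element in this script are dropped, so the output is {"name": "Mariano","roles": ["admin"]}:
%dw 2.0
output application/json skipNullOn="everywhere"
---
{
  name: "Mariano",
  nickname: null,
  roles: ["admin", null]
}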
XML
Mime Type: application/xml
The XML data structure is mapped to DataWeave objects that can contain other objects as values to their keys. Repeated keys are supported. Example:
<users>
<company>MuleSoft</company>
<user name="Leandro" lastName="Shokida"/>
<user name="Mariano" lastName="Achaval"/>
</users>
{
users: {
company: "MuleSoft",
user @(name: "Leandro",lastName: "Shokida"): "",
user @(name: "Mariano",lastName: "Achaval"): ""
}
}
Reader Properties (for XML)
When defining an input of type XML, there are a few optional parameters you can add in the XML definition of your Mule project to customize how the data is parsed.
Parameter | Type | Default | Description |
---|---|---|---|
nullValueOn | string | | If a tag with empty or blank text should be read as null |
indexedReader | boolean | | Picks which reader modality to use. The indexed reader is faster but uses a greater amount of memory, while the unindexed reader is slower but uses less memory |
maxEntityCount | integer | | Limits the number of times that an entity can be referenced within the XML code. This is included to guard against denial-of-service attacks |
externalEntities | boolean | | Defines if references to entities that are defined in a file outside the XML are accepted as valid. It is recommended to avoid these for security reasons as well |
supportDtd | boolean | | Enable or disable DTD support. Disabling skips (and does not process) internal and external subsets |
Writer Properties (for XML)
When defining an output of type XML, there are a few optional parameters you can add to the output directive to customize how the data is written:
Parameter | Type | Default | Description |
---|---|---|---|
indent | boolean | true | Defines if the XML code will be indented for better readability, or if it will be compressed into a single line |
inlineCloseOn | string | | Defines whether an empty XML child element appears as a single self-closing tag or with opening and closing tags. Possible values = empty/none |
encoding | string | UTF-8 | The character set to be used for the output |
bufferSize | number | 153600 | The size of the writer buffer |
skipNullOn | string | | Possible values = elements, attributes, or everywhere (see Skip Null On below) |
writeDeclaration | boolean | true | Defines if the XML declaration will be included in the first line |
For example:
output application/xml indent=false, skipNullOn="attributes"
The inlineCloseOn
parameter defines whether the output is structured like this (the default):
<someXml>
<parentElement>
<emptyElement1></emptyElement1>
<emptyElement2></emptyElement2>
<emptyElement3></emptyElement3>
</parentElement>
</someXml>
It can also be structured like this (set with a value of empty
):
<payload>
<someXml>
<parentElement>
<emptyElement1/>
<emptyElement2/>
<emptyElement3/>
</parentElement>
</someXml>
</payload>
See also, Example: Outputting Self-closing XML Tags.
Skip Null On (for XML)
You can specify whether your transform generates an outbound message that contains fields with null values, or if these fields are ignored entirely. This can be set through an attribute in the output directive named skipNullOn, which can be set to three different values: elements, attributes, or everywhere.
When set to:
- elements: A key:value pair with a null value is ignored.
- attributes: An XML attribute with a null value is skipped.
- everywhere: Apply this rule to both elements and attributes.
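For example, with skipNullOn="attributes", the null attribute in this script is omitted while the element itself is still written as <user name="Leandro"></user>:
%dw 2.0
output application/xml skipNullOn="attributes"
---
{
  users: {
    user @(name: "Leandro", nickname: null): ""
  }
}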
Defining a Metadata Type (for XML)
In the Transform component, you can define an XML type through the following methods:
- By providing a sample file
- By pointing to a schema file
CData Custom Type (for XML)
Mime Type: application/xml
CData
is a custom data type for XML that is used to identify a CDATA XML block. It can tell the writer to wrap the content inside CDATA or to check if the input string arrives inside a CDATA block. CData
inherits from the type String
.
%dw 2.0
output application/xml
---
{
users:
{
user : "Mariano" as CData,
age : 31 as CData
}
}
<?xml version="1.0" encoding="UTF-8"?>
<users>
<user><![CDATA[Mariano]]></user>
<age><![CDATA[31]]></age>
</users>
URL Encoding
Mime Type: application/x-www-form-urlencoded
A URL encoded string is mapped to a DataWeave object:
- You can read the values by their keys using the dot or star selector.
- You can write the payloads by providing a DataWeave object.
Here is an example of x-www-form-urlencoded
data:
key=value&key+1=%40here&key=other+value&key+2%25
The following DataWeave script produces the data above:
output application/x-www-form-urlencoded
---
{
"key" : "value",
"key 1": "@here",
"key" : "other value",
"key 2%": null
}
You can read in the data above as input to the DataWeave script in the next example to return value@here
as the result.
output text/plain
---
payload.*key[0] ++ payload.'key 1'
Note that there are no reader properties for URL encoded data.
Writer (for URL Encoded Data)
Here is the DataWeave output directive for writing form data:
output application/x-www-form-urlencoded
In the output directive, you can also set a property for the writer to use when it outputs the data in the specified format.
Parameter | Default | Description |
---|---|---|
encoding | UTF-8 | Specifies the encoding to use |
bufferSize | 192 kb | Specifies the number of bytes to use for the buffer |
output application/x-www-form-urlencoded encoding="UTF-8", bufferSize="500"
Note that in the DataWeave write
function, you can also pass the property as an optional parameter. The scope of the property is limited to the DataWeave script where you call the function.