loadTextEx

Syntax

loadTextEx(dbHandle, tableName, partitionColumns, filename, [delimiter], [schema], [skipRows=0], [transform], [sortColumns], [atomic=false], [arrayDelimiter], [containHeader])

Arguments

dbHandle the handle of the database where the imported data will be saved. The database can be either a DFS (distributed file system) database or an in-memory database.

tableName a string indicating the name of the table with the imported data.

partitionColumns a string scalar/vector indicating the partitioning column(s). For a sequential partition, partitionColumns is "" as no partitioning column is needed. For a composite partition, partitionColumns is a string vector.

filename the input text file name with its absolute path.

delimiter a STRING scalar indicating the table column separator. It can consist of one or more characters; the default is a comma (',').
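For instance, to load a tab-separated file into a partitioned in-memory table (a sketch; the file t.tsv is hypothetical and not generated in the Examples below):

$ db = database(directory="", partitionType=VALUE, partitionScheme=2017.08.07..2017.08.11)
$ pt = loadTextEx(dbHandle=db, tableName="", partitionColumns=`date, filename="/home/DolphinDB/Data/t.tsv", delimiter="\t");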

schema a table. It can have the following columns, among which "name" and "type" columns are required.

column name | data type            | meaning
----------- | -------------------- | ------------------------------
name        | STRING scalar        | column name
type        | STRING scalar        | data type
format      | STRING scalar        | the format of temporal columns
col         | INT scalar or vector | the columns to be loaded
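For example, a schema table can be constructed directly (a sketch; the column names and types here are illustrative and must match the actual file, and db is assumed to have been created as in Ex 1 below):

$ schema = table(`ID`date`vol as name, `INT`DATE`INT as type)
$ pt = loadTextEx(dbHandle=db, tableName=`pt, partitionColumns=`ID, filename="/home/DolphinDB/Data/t.txt", schema=schema);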

skipRows is an integer between 0 and 1024 indicating the number of rows at the beginning of the text file to skip. The default value is 0.
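For example, to skip the first line of the file (a sketch, assuming the same database and file as in Ex 1 below):

$ pt = loadTextEx(dbHandle=db, tableName=`pt, partitionColumns=`ID, filename="/home/DolphinDB/Data/t.txt", skipRows=1);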

transform is a unary function whose only parameter is a table. It is applied to the loaded data before the result is saved to the database.

sortColumns is a string scalar/vector indicating the columns based on which the table is sorted.

atomic is a Boolean value indicating whether to guarantee atomicity when loading a file with the cache engine enabled. If set to true, the entire loading process of a file is one transaction; if set to false, the loading process is split into multiple transactions.

Note: It is required to set atomic = false if the file to be loaded exceeds the cache engine capacity. Otherwise, a transaction may get stuck: it can neither be committed nor rolled back.

arrayDelimiter is a single character indicating the delimiter for array vector columns in the file. Since array vectors cannot be recognized automatically, you must use the schema parameter to set the type column of those columns to the corresponding array vector data type before import.

containHeader is a Boolean value indicating whether the file contains a header row. The default value is NULL. See loadText for the detailed rules used to determine the header.

Details

Load a text file into a DolphinDB database.

If dbHandle is specified and is not an empty string "": load the text file into a distributed database. The result is a table object containing the metadata of the table.

If dbHandle is an empty string "" or unspecified: load the text file as a partitioned in-memory table. In this case, when defining dbHandle with function database, the parameter directory must also be an empty string "" or unspecified.

If the parameter transform is specified, we need to first execute createPartitionedTable and then load the data. The system applies the function specified by transform to the loaded data and then saves the results to the database.

Function loadTextEx and function loadText share many common features, such as whether the first row is treated as the header, how the data types of columns are determined, how the system adjusts illegal column names, etc. For more details, please refer to function loadText.

Examples

Use the following script to generate the text file to be used:

$ n=10000
$ ID=rand(100, n)
$ dates=2017.08.07..2017.08.11
$ date=rand(dates, n)
$ vol=rand(1..10 join int(), n)
$ t=table(ID, date, vol)
$ saveText(t, "/home/DolphinDB/Data/t.txt");

Ex 1. Load t.txt into a DFS database with a range domain on ID.

$ db = database(directory="dfs://rangedb", partitionType=RANGE, partitionScheme=0 51 101)
$ pt=loadTextEx(dbHandle=db,tableName=`pt, partitionColumns=`ID, filename="/home/DolphinDB/Data/t.txt");

For a TSDB engine:

$ db = database(directory="dfs://rangedb", partitionType=RANGE, partitionScheme=0 51 101, engine='TSDB')
$ pt=loadTextEx(dbHandle=db, tableName=`pt, partitionColumns=`ID, filename="/home/DolphinDB/Data/t.txt", sortColumns=`ID`date);

To load table pt from the database:

$ db = database("dfs://rangedb")
$ pt = loadTable(db, `pt);

Ex 2. Load t.txt into a DFS database with a composite domain.

$ dbDate = database(directory="", partitionType=VALUE, partitionScheme=2017.08.07..2017.08.11)
$ dbID=database(directory="", partitionType=RANGE, partitionScheme=0 51 101)
$ db = database(directory="dfs://compoDB", partitionType=COMPO, partitionScheme=[dbDate, dbID])
$ pt = loadTextEx(dbHandle=db,tableName=`pt, partitionColumns=`date`ID, filename="/home/DolphinDB/Data/t.txt");

Ex 3. Load t.txt into a partitioned in-memory table with value domain on date.

$ db = database(directory="", partitionType=VALUE, partitionScheme=2017.08.07..2017.08.11)
$ pt = db.loadTextEx(tableName="", partitionColumns=`date, filename="/home/DolphinDB/Data/t.txt");

$ pt.sortBy!(`ID`vol);

Ex 4. Convert all NULL values to 0 and then load the data into a DFS database with a composite domain.

$ dbDate = database(directory="", partitionType=VALUE, partitionScheme=2017.08.07..2017.08.11)
$ dbID=database(directory="", partitionType=RANGE, partitionScheme=0 51 101)
$ db = database(directory="dfs://compoDB", partitionType=COMPO, partitionScheme=[dbDate, dbID]);

$ pt=db.createPartitionedTable(table=t, tableName=`pt, partitionColumns=`date`ID)
$ pt=loadTextEx(dbHandle=db, tableName=`pt, partitionColumns=`date`ID, filename="/home/DolphinDB/Data/t.txt", transform=nullFill!{,0});

Ex 5. Load array vectors

Create a CSV file:

$ bid = array(DOUBLE[], 0, 20).append!([1.4799 1.479 1.4787, 1.4796 1.479 1.4784, 1.4791 1.479 1.4784])
$ ask = array(DOUBLE[], 0, 20).append!([1.4821 1.4825 1.4828, 1.4818 1.482 1.4821, 1.4814 1.4818 1.482])
$ TradeDate = 2022.01.01 + 1..3
$ SecurityID = rand(`APPL`AMZN`IBM, 3)
$ t = table(SecurityID as `sid, TradeDate as `date, bid as `bid, ask as `ask)
$ t;
$ saveText(t,filename="/home/DolphinDB/Data/t.csv",delimiter=',',append=true)

Load the file with loadTextEx:

$ db = database(directory="dfs://testDB", partitionType=VALUE, partitionScheme=`APPL`AMZN`IBM, engine='TSDB')
$ path = "/home/DolphinDB/Data/t.csv"
$ schema = extractTextSchema(path);
$ update schema set type = "DOUBLE[]" where name="bid" or name ="ask"
$ pt = loadTextEx(dbHandle=db, tableName=`t, partitionColumns=`sid, filename=path, schema=schema, arrayDelimiter=",", sortColumns=`sid`date);
$ select * from pt;

sid  | date       | bid                          | ask
---- | ---------- | ---------------------------- | ----------------------------
AMZN | 2022.01.04 | [1.479100,1.479000,1.478400] | [1.481400,1.481800,1.482000]
AMZN | 2022.01.03 | [1.479600,1.479000,1.478400] | [1.481800,1.482000,1.482100]
AMZN | 2022.01.04 | [1.479100,1.479000,1.478400] | [1.481400,1.481800,1.482000]
APPL | 2022.01.02 | [1.479900,1.479000,1.478700] | [1.482100,1.482500,1.482800]
APPL | 2022.01.03 | [1.479600,1.479000,1.478400] | [1.481800,1.482000,1.482100]
APPL | 2022.01.04 | [1.479100,1.479000,1.478400] | [1.481400,1.481800,1.482000]
IBM  | 2022.01.02 | [1.479900,1.479000,1.478700] | [1.482100,1.482500,1.482800]
IBM  | 2022.01.03 | [1.479600,1.479000,1.478400] | [1.481800,1.482000,1.482100]
IBM  | 2022.01.02 | [1.479900,1.479000,1.478700] | [1.482100,1.482500,1.482800]