CSV (.csv)
Background
- MIME types: text/comma-separated-values, text/csv
- CSV tabular data format.
- Stores records of numerical and textual information as lines, using commas to separate fields.
- Commonly used in spreadsheet applications as an exchange format.
- CSV is an acronym for Comma-Separated Values.
- Plain text format.
- Similar to TSV.
- Supports RFC 4180.
Import & Export
- Import["file.csv"] returns a list of lists containing strings and numbers, representing the rows and columns stored in the file.
- Import["file.csv",elem] imports the specified element.
- Import["file.csv",{elem,subelem1,…}] imports the specified subelements subelemi, useful for partial data import.
- The import format can be specified with Import["file","CSV"] or Import["file",{"CSV",elem,…}].
- Export["file.csv",expr] creates a CSV file from expr.
- Supported expressions expr include:
- {v1,v2,…}: a single column of data
- {{v11,v12,…},{v21,v22,…},…}: lists of rows of data
- array: an array such as SparseArray, QuantityArray, etc.
- tseries: a TimeSeries, EventSeries or TemporalData object
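As a minimal sketch of the rows-of-data case (the file name sample.csv is illustrative, not from this page):

```wl
(* Export a list of rows to a CSV file, then read it back *)
Export["sample.csv", {{"a", 1}, {"b", 2}}];
Import["sample.csv"]
```

The round trip returns the same list of rows, with numeric fields read back as numbers.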
- See the following reference pages for full general information:
- CloudImport, CloudExport: import from or export to a cloud object
- ImportString, ExportString: import from or export to a string
- ImportByteArray, ExportByteArray: import from or export to a byte array
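For example, the string variants can round-trip CSV without touching the file system (a sketch; the data here is illustrative):

```wl
(* Serialize rows to a CSV string, then parse that string back *)
csv = ExportString[{{"x", "y"}, {1, 2}}, "CSV"];
ImportString[csv, "CSV"]
```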
Import Elements
- General Import elements:
- "Elements": list of elements and options available in this file
- "Summary": summary of the file
- "Rules": list of rules for all available elements
- Data representation elements:
- "Data": two-dimensional array
- "Grid": table data as a Grid object
- "RawData": two-dimensional array of strings
- "Dataset": table data as a Dataset
- "Tabular": table data as a Tabular object
- Data descriptor elements:
- "ColumnLabels": names of columns
- "ColumnTypes": association of column names and types
- "Schema": TabularSchema object
- Import and Export use the "Data" element by default.
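In other words, for a hypothetical file.csv these two calls are equivalent:

```wl
(* The "Data" element is implied when no element is given *)
Import["file.csv"] === Import["file.csv", "Data"]
```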
- Subelements for partial data import for any element elem can take row and column specifications in the form {elem,rows,cols}, where rows and cols can be any of the following:
- n: the nth row or column
- -n: counts from the end
- n;;m: from n through m
- n;;m;;s: from n through m in steps of s
- {n1,n2,…}: specific rows or columns ni
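A sketch of these row and column specifications in use (the file name is illustrative):

```wl
(* Rows 2 through 5, all columns *)
Import["file.csv", {"Data", 2 ;; 5}]

(* A single cell: row 3, column 2 *)
Import["file.csv", {"Data", 3, 2}]
```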
- Column specifications for files with headers can also be any of the following:
- "col": the single column named "col"
- {col1,col2,…}: list of column names coli
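For a file with a header row, columns can be addressed by name (the file and column names here are illustrative):

```wl
(* Select two named columns across all rows *)
Import["file.csv", {"Data", All, {"name", "population"}}]
```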
- Metadata elements:
- "ColumnCount": number of columns
- "Dimensions": a list of the number of rows and the maximum number of columns
- "RowCount": number of rows
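Metadata elements are requested like any other element (file name illustrative):

```wl
(* {rows, max columns} without importing the full data *)
Import["file.csv", "Dimensions"]

(* Equivalent form with an explicit format specification *)
Import["file.csv", {"CSV", "RowCount"}]
```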
Examples
Basic Examples (3)
Import a CSV file:
Read and plot all data from the file:
Import summary of a CSV file:
Export an array of expressions to CSV:
Scope (8)
Import (4)
Import metadata from a CSV file:
Import a CSV file as a Tabular object with automatic header detection:
Import without headers, while skipping the first line:
Import a sample row of a CSV:
Analyze a single column of a file; start by looking at column labels and their types:
Get all values for one column:
Compute the mean:
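The single-column workflow above can be sketched as follows (the file and the column name "price" are illustrative):

```wl
(* Inspect the available columns and their detected types *)
Import["file.csv", "ColumnLabels"]
Import["file.csv", "ColumnTypes"]

(* Pull one column as a flat list and average it *)
values = Import["file.csv", {"Data", All, "price"}];
Mean[values]
```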
Export (4)
Export a Tabular object:
Use "TableHeadings" option to remove header from a Tabular object:
Export a TimeSeries:
Export an EventSeries:
Export a QuantityArray:
Import Elements (30)
"ColumnCount" (1)
Get the number of columns from a CSV file:
"ColumnLabels" (1)
Get the inferred column labels from a CSV file:
"ColumnTypes" (1)
Get the inferred column types from a CSV file:
"Data" (7)
Import a CSV file as a 2D list of values:
This is also the default element:
Import a single row from a CSV file:
Import some specific rows from a CSV file:
Import the first 10 rows of a CSV file:
Import a single row and column from a CSV file:
Import a single column from a CSV file:
Import selected columns using column names:
"Dataset" (3)
Import a CSV file as a Dataset:
Use "HeaderLines" and "SkipLines" options to only import the data of interest:
Import selected columns using column names:
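A sketch combining the "Dataset" element with these options (file and column names are illustrative):

```wl
(* Use row 1 as headers and skip one leading line of the file *)
Import["file.csv", "Dataset", "HeaderLines" -> 1, "SkipLines" -> 1]

(* Restrict the Dataset to two named columns *)
Import["file.csv", {"Dataset", All, {"name", "value"}}]
```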
"Dimensions" (2)
Get the dimensions from a CSV file:
If the rows in the file do not all have the same number of columns, some rows may be considered invalid:
"Grid" (1)
Import CSV data as a Grid:
"RawData" (3)
"RowCount" (1)
Get the number of rows from a CSV file:
"Schema" (1)
Get the TabularSchema object:
"Summary" (1)
Summary of a CSV file:
"Tabular" (8)
Import a CSV file as a Tabular object:
Use "HeaderLines" and "SkipLines" options to only import the data of interest:
Import a single row:
Import multiple rows:
Import the first 5 rows:
Import a single element at a given row and column:
Import a single column:
Import selected columns using column names:
Import Options (15)
CharacterEncoding (1)
The character encoding can be set to any value from $CharacterEncodings:
"ColumnTypeDetectionDepth" (1)
By default, several dozen rows from the beginning of the file are used to detect column types:
Use more rows to detect column types:
"CurrencyTokens" (1)
Currency tokens are not automatically skipped:
Use the "CurrencyTokens" option to skip selected currency tokens:
"DateStringFormat" (1)
Convert dates to a DateObject using the date format specified:
By default, no conversion happens:
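A sketch of the option (the file name and date layout are illustrative; the format specification follows DateString conventions):

```wl
(* Parse fields such as 2024-01-31 into DateObject expressions *)
Import["file.csv", "Data",
  "DateStringFormat" -> {"Year", "-", "Month", "-", "Day"}]
```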
"EmptyField" (1)
Specify a default value for empty fields in CSV data:
"FieldSeparator" (1)
By default, "," is used as a field separator:
Use tab as a field separator:
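The same mechanism handles other delimiters, such as the semicolon common in European locales (file name illustrative):

```wl
(* Read a semicolon-separated file through the CSV importer *)
Import["file.csv", "Data", "FieldSeparator" -> ";"]
```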
"FillRows" (1)
For the "Data" element, row lengths are automatically preserved:
Pad rows:
For the "RawData" element, a full array is imported by default:
"HeaderLines" (1)
The header line is automatically detected by default:
Use "HeaderLines" option when automatic header detection is incorrect:
Specify row headers:
Specify row and column headers:
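The header specifications above can be sketched as (file name illustrative):

```wl
(* One header line of column names *)
Import["file.csv", "Dataset", "HeaderLines" -> 1]

(* First row as column headers and first column as row headers *)
Import["file.csv", "Dataset", "HeaderLines" -> {1, 1}]
```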
"IgnoreEmptyLines" (1)
Use "IgnoreEmptyLines" to remove lines with no data from the imported data:
MissingValuePattern (1)
By default, an automatic set of values is considered missing:
Use MissingValuePattern -> None to disable missing element detection:
Use string patterns to find missing elements:
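A sketch of a custom pattern (the tokens "NA" and "n/a" are illustrative):

```wl
(* Treat these strings as missing values on import *)
Import["file.csv", "Tabular", MissingValuePattern -> "NA" | "n/a"]
```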
"Numeric" (1)
Use "Numeric"->True to interpret numbers:
By default, everything imports as strings:
"NumberPoint" (1)
By default, "." is used as the decimal point character for floating-point data:
Use the "NumberPoint" option to specify a different decimal point character:
"QuotingCharacter" (1)
The default quoting character is a double quote:
A different quoting character can be specified:
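For example, a file whose fields are wrapped in single quotes could be read as (file name illustrative):

```wl
(* Treat ' rather than " as the quoting character *)
Import["file.csv", "Data", "QuotingCharacter" -> "'"]
```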
"Schema" (1)
Import automatically infers column labels and types from data stored in a CSV file:
Use "Schema" option to specify column labels and types:
"SkipLines" (1)
CSV files may include a comment line:
Skip the comment line:
Skip the comment line and use the next line as a Tabular header:
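The comment-skipping workflow can be sketched as (file name illustrative):

```wl
(* Drop one leading comment line, then take the next line as headers *)
Import["file.csv", "Tabular", "SkipLines" -> 1, "HeaderLines" -> 1]
```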
Export Options (8)
Alignment (1)
By default, no additional characters are added for any alignment:
Left-align column values:
Center-align column values:
CharacterEncoding (1)
The character encoding can be set to any value from $CharacterEncodings:
"EmptyField" (1)
By default, empty elements are exported as empty strings:
Specify a different value for empty elements:
"ExpressionFormattingFunction" (1)
"FillRows" (1)
Row lengths are preserved by default:
Use "FillRows"->True to export full array:
"IncludeQuotingCharacter" (1)
"QuotingCharacter" (1)
The default quoting character used for non-numeric elements is a double quote:
Specify a different quoting character:
Use "QuotingCharacter"->"" to export all values without quotes. Note that headers are always enclosed in quotes:
"TableHeadings" (1)
By default, column headers are exported:
Use "TableHeadings" -> None to skip column headers:
Export data using custom column headers:
Export data using custom column and row headers:
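A sketch of custom headers on export (file name and labels are illustrative):

```wl
(* Column headers only *)
Export["out.csv", {{1, 2}, {3, 4}}, "TableHeadings" -> {"a", "b"}]
```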
Applications (1)
Export a list of European countries and their populations to a CSV file:
Import the data back and convert to expressions:
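The application above can be sketched end to end (the figures here are illustrative placeholders, not data from this page):

```wl
(* Export name/population pairs, then re-import with numeric interpretation *)
data = {{"Germany", 83200000}, {"France", 68000000}};
Export["population.csv", data];
Import["population.csv", "Data", "Numeric" -> True]
```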
Possible Issues (14)
For ragged arrays, where rows have different numbers of columns, some rows may be considered invalid:
Use "Backend" -> "Table" to avoid skipping those rows:
Entries of the format "nnnEnnn" are interpreted as numbers with scientific notation:
Use the "Numeric" option to override this interpretation:
Numeric interpretation may result in a loss of precision:
Use the "Numeric" option to override this interpretation:
Starting from Version 14.3, some expressions are converted to strings using -Head- format:
Use "ExpressionFormattingFunction"->InputForm to get the previous result:
Use "Backend"->"Table" to get the previous result:
Starting from Version 14.2, currency tokens are not automatically skipped:
Use the "CurrencyTokens" option to skip such tokens:
Starting from Version 14.2, quoting characters are added when the column of integer values contains numbers greater than Developer`$MaxMachineInteger:
Use "IncludeQuotingCharacter"->None to get the previous result:
Starting from Version 14.2, some strings are automatically considered missing:
Use MissingValuePattern -> None to override this interpretation:
Starting from Version 14.2, real numbers with 0 fractional part are exported as integers:
Use "Backend"->"Table" to get the previous result:
Starting from Version 14.2, column types are identified automatically:
Use "Backend" -> "Table" if nonhomogeneous types in columns are expected:
Starting from Version 14.2, integers greater than Developer`$MaxMachineInteger are imported as real numbers:
Use "Backend"->"Table" to get the previous result:
Starting from Version 14.2, date and time columns of Tabular objects are exported using DateString:
Use "Backend"->"Table" to get the previous result:
Some CSV data generated from older versions of the Wolfram Language may have incorrectly delimited text fields and will not import as expected in Version 11.2 or higher:
Using "QuotingCharacter" -> "" will give the previously expected result:
The top-left corner of data is lost when importing a Dataset with row and column headers:
Dataset may look different depending on the dimensions of the data:
See Also
Import ▪ Export ▪ CloudExport ▪ CloudImport
Formats: TSV ▪ Parquet ▪ ArrowIPC ▪ ArrowDataset ▪ ORC ▪ Table ▪ XLS ▪ XLSX
History
Introduced in 1999 (4.0) | Updated in 2019 (12.0) ▪ 2025 (14.2) ▪ 2025 (14.3) ▪ 2025 (15.0)