If your data fits into CPU memory, it is good practice to save it as a single pickle file (or another format you know how to deserialize). This has several advantages. First, it is easier and faster to read from one big file than from many small files. Second, it avoids the possible system error of opening too many files (although avoiding lazy data loading is another way to fix that issue). Some example datasets (e.g., MNIST) ship separate training and testing files (i.e., two pickle files) so that research based on them can be easily reproduced. When implementing your own Dataset class, I suggest keeping a single file containing all the data; you can always use the function torch.utils.data.random_split to split it into training and test subsets at run time, as sketched below.
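A minimal sketch of this pattern follows. The file name data.pkl and the assumption that the pickle holds a dict with "features" and "labels" keys are hypothetical; adapt them to however you serialized your own data.

```python
import pickle

import torch
from torch.utils.data import Dataset, random_split


class PickleDataset(Dataset):
    """Loads the entire dataset from one pickle file into CPU memory."""

    def __init__(self, pkl_path="data.pkl"):
        # Assumed layout: a dict with "features" and "labels" entries.
        with open(pkl_path, "rb") as f:
            data = pickle.load(f)
        self.features = torch.as_tensor(data["features"])
        self.labels = torch.as_tensor(data["labels"])

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]


# Split the single dataset into train/test subsets at run time.
dataset = PickleDataset("data.pkl")
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
```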
Split a Dataset into Train and Test Datasets in Python
Scikit-learn Compatible Packages
sklearn.model_selection.train_test_split is the best way to split a dataset into train and test subsets for scikit-learn compatible packages (scikit-learn, XGBoost, LightGBM, etc.). It supports splitting both iterable objects (numpy arrays, lists, pandas Series) and pandas DataFrames. When splitting an iterable object, it returns (train, test) splits of the same type as the input (e.g., lists for a list input). When splitting a pandas DataFrame, it returns (train, test) as DataFrames.
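A minimal sketch of both cases is shown below; the toy list and DataFrame are made up for illustration, and the split keeps the input's type.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Splitting a list-like object: the returned splits keep the input's type.
X = list(range(10))
X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)
print(type(X_train))   # <class 'list'>

# Splitting a pandas DataFrame: the returned splits are DataFrames.
df = pd.DataFrame({"a": range(10), "b": range(10)})
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)
print(type(train_df))  # <class 'pandas.core.frame.DataFrame'>
```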