Ben Chuanlong Du's Blog

It is never too late to learn.

Monitoring and Alerting Tools

Things on this page are fragmentary and immature notes/thoughts of the author. Please read with your own judgement!

Grafana seems like another great choice! https://grafana.com/grafana/download?platform=docker

Prometheus sounds like a good one!

ELKI sounds like a possible tool for monitoring & alerting.

Argus is another one …

Device Management in PyTorch


  1. Modules can hold parameters of different types on different devices, so it's not always possible to unambiguously determine the device. The recommended workflow in PyTorch is to create the device object …
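The recommended workflow above can be sketched as follows (a minimal example; the linear layer and tensor are just placeholders):

```python
import torch

# Create the device object once, then move both the model and the data to it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)  # parameters now live on `device`
x = torch.randn(8, 4, device=device)      # create the tensor directly on `device`
y = model(x)                              # no cross-device mismatch
```

Creating the `device` object up front and passing it around avoids scattering `.cuda()` calls through the code, and the same script then runs unchanged on CPU-only machines.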

Time Series Analysis


In statistics, a unit root test tests whether a time series variable is non-stationary using an autoregressive model. A well-known test that is valid in large samples is the augmented Dickey …

Experiment Design


  1. Completely Randomized Design (CRD)

  2. Randomized Complete Block Design (RCBD)

    • same RNE as CRD

  3. Latin Square Design (LSD)

    • same RNE as CRD

  4. Balanced Incomplete Block Design

    • all treatments cannot fit in any …
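In a Latin Square Design, each treatment appears exactly once in every row and every column. A minimal sketch of the cyclic construction (pure Python; the helper name is made up for illustration):

```python
def latin_square(treatments):
    """Cyclic Latin square: row i is the treatment list rotated by i positions,
    so every treatment occurs exactly once per row and once per column."""
    n = len(treatments)
    return [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]

square = latin_square(["A", "B", "C", "D"])
for row in square:
    print(row)
```

The cyclic construction gives one valid Latin square; in practice the rows, columns, and treatment labels are then randomly permuted before running the experiment.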

Sampling Methods


Probability Sampling

  • Random Sampling

  • Systematic Sampling

  • Stratified Sampling

Non-probability Sampling

  • Convenience Sampling

  • Judgement Sampling

  • Quota Sampling

  • Snowball Sampling
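Two of the probability methods above can be sketched in a few lines (the function names are illustrative, not from any library):

```python
import random

def systematic_sample(population, step, seed=0):
    """Systematic sampling: pick a random start, then take every `step`-th element."""
    start = random.Random(seed).randrange(step)
    return population[start::step]

def stratified_sample(population, key, n_per_stratum, seed=0):
    """Stratified sampling: simple random sample within each stratum defined by `key`."""
    rng = random.Random(seed)
    strata = {}
    for item in population:
        strata.setdefault(key(item), []).append(item)
    return {s: rng.sample(items, min(n_per_stratum, len(items)))
            for s, items in strata.items()}
```

Stratified sampling guarantees representation from every stratum, which plain random sampling does not, while systematic sampling is cheap but can be biased if the population has a periodic pattern matching the step.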


Tips on Dataset in PyTorch

  1. If your data fits into CPU memory, it is good practice to save it into one pickle file (or another format that you know how to deserialize). This comes with several advantages. First, it is easier and faster to read from a single big file than from many small files. Second, it avoids the possible system error of opening too many files (even though avoiding lazy data loading is another way to fix the issue). Some example datasets (e.g., MNIST) have separate training and testing files (i.e., 2 pickle files) so that research based on them can be easily reproduced. I personally suggest that you keep only 1 file containing all the data when implementing your own Dataset class. You can always use the function torch.utils.data.random_split to split it into training and testing subsets.
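A minimal sketch of the one-pickle-file approach (the class name and file layout are made up for illustration):

```python
import pickle

import torch
from torch.utils.data import Dataset, random_split

class PickleDataset(Dataset):
    """Dataset that loads all samples from a single pickle file into memory."""

    def __init__(self, path):
        with open(path, "rb") as f:
            self.samples = pickle.load(f)  # e.g., a list of (features, label) pairs

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        x, y = self.samples[idx]
        return torch.tensor(x), torch.tensor(y)

# With everything in one file, a train/test split is a one-liner, e.g.:
# train_ds, test_ds = random_split(ds, [800, 200])
```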