Transformers on GitHub: https://github.com/huggingface/transformers
The documentation for `run_squad.py` can be found here: https://huggingface.co/transformers/examples.html#squad
## Configure environment
...
...
## Run the network training
```bash
qsub <pbs_script>.pbs
```
Two training examples are provided:
- `single_gpu_training.pbs`: trains the network on a single GPU
- `dual_gpu_training.pbs`: trains the network on two GPUs
Notes:
- Some temporary data is written to the `--output_dir` directory (`./debug_squad/`). You may have to clean this directory manually before relaunching the training: `rm -r ./debug_squad/`
- During the TP sessions, you can use the `isiaq` reservation instead of the `gpuq` queue by commenting/uncommenting the lines beginning with `#PBS -q`
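For reference, a PBS job script for this kind of training typically looks like the sketch below. This is only an illustration, not the contents of the provided `.pbs` files: the job name, walltime, and `run_squad.py` arguments shown here are assumptions, and the exact `#PBS -l` resource line depends on your cluster's configuration.

```bash
#!/bin/bash
# Hypothetical single-GPU job script (job name, walltime, and script
# arguments are assumptions; adapt to your cluster and dataset paths).
#PBS -N squad_training
#PBS -q gpuq
## During TP sessions, comment the line above and uncomment the one below
## to use the isiaq reservation instead of the gpuq queue:
##PBS -q isiaq
#PBS -l walltime=04:00:00

# PBS starts the job in the home directory; move to the submission directory
cd "$PBS_O_WORKDIR"

# Clean leftover temporary data from a previous run before relaunching
rm -rf ./debug_squad/

python run_squad.py \
    --model_type bert \
    --model_name_or_path bert-base-uncased \
    --do_train \
    --do_eval \
    --train_file train-v1.1.json \
    --predict_file dev-v1.1.json \
    --output_dir ./debug_squad/
```

The commented-out `##PBS` line is ignored by the scheduler, which is why switching queues is done by commenting/uncommenting the `#PBS -q` lines rather than editing them.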