```bash
find /usr/lib/modules -name nvidia.ko
nvidia-settings
```
## run
```bash
nvidia-smi
nvidia-smi -L # list all GPUs
nvidia-smi -l n # run every n seconds
```
## monitoring nvidia
<https://github.com/fbcotter/py3nvml>
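The linked py3nvml package exposes NVIDIA's NVML API to Python 3. A minimal polling sketch (assuming `pip install py3nvml`; function names follow the NVML bindings):
```python
# minimal GPU monitoring sketch via py3nvml (a sketch, not the library's docs)
from py3nvml.py3nvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetCount, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetName, nvmlDeviceGetMemoryInfo, nvmlDeviceGetUtilizationRates,
)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        mem = nvmlDeviceGetMemoryInfo(handle)         # bytes
        util = nvmlDeviceGetUtilizationRates(handle)  # percentages
        print(f"GPU {i} ({nvmlDeviceGetName(handle)}): "
              f"{mem.used / 2**20:.0f}/{mem.total / 2**20:.0f} MiB, "
              f"util {util.gpu}%")
finally:
    nvmlShutdown()
```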
---
## error: successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
Fix it on the host by setting the -1 to 0:
```bash
# the affected file, e.g.:
# /sys/bus/pci/devices/0000:2b:00.0/numa_node
for a in /sys/bus/pci/devices/*; do echo 0 | sudo tee -a $a/numa_node; done
```
<https://stackoverflow.com/questions/44232898/memoryerror-in-tensorflow-and-successful-numa-node-read-from-sysfs-had-negativ>
---
Make the NUMA fix persistent across reboots with a root cron job:
```bash
sudo VISUAL=vi crontab -e
# Add the following line
@reboot (echo 0 | tee -a "/sys/bus/pci/devices/<PCI_ID>/numa_node")
```
[Source](https://askubuntu.com/questions/1379119/how-to-set-the-numa-node-for-an-nvidia-gpu-persistently)
---
## start docker with --gpus=all every time, otherwise error
### failed call to cuInit: UNKNOWN ERROR (-1)
### no NVIDIA GPU device is present: /dev/nvidia0 does not exist
```bash
docker run -it -p 8888:8888 --gpus=all tensorflow/tensorflow:latest-gpu-jupyter
```
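To confirm the container actually sees the GPU, one quick check is to list TensorFlow's physical devices (a sketch using the same image; `tf.config.list_physical_devices` is standard TensorFlow):
```bash
# an empty list [] means the GPU is not visible inside the container
docker run --rm --gpus=all tensorflow/tensorflow:latest-gpu-jupyter \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```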
---
## update nvidia drivers
```bash
ubuntu-drivers autoinstall
```

```python
from fastai.vision.all import *  # imports assumed; not shown in this excerpt

path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate, pretrained=True)
learn.fine_tune(epochs=1)
learn.show_results(max_n=6, figsize=(7,8))
```
- **Image**DataLoaders: the first part of the name indicates the type of data, e.g. image or text
- other variants: **Tabular**DataLoaders, **Text**DataLoaders or **Segmentation**DataLoaders
- **valid_pct**=0.2: fraction of the data held out as the **validation set** (== development set), used to **measure the accuracy of the model**; the remainder is the **training set**
- **seed**=42: sets the random seed to the same value on every run, which means we **get the same validation set** every time we run this code
When hiring a third party, use a good test set to mitigate risks.
*There are many methods to avoid overfitting, but use them only after confirming that overfitting is actually occurring.*
- learn = **cnn_learner**(dls, resnet34, metrics=error_rate): convolutional neural network (CNN)
- dls: the data
- resnet34: the 34 in resnet34 refers to the number of layers in the ResNet variant. Models with more layers take longer to train and are more prone to overfitting; given more data, they can be quite a bit more accurate.
- metric: measures the quality of the model's predictions on the validation set, printed at the end of each epoch
![7e3dd277a7935550cabb842d0b3e91ef.png](../../_resources/7e3dd277a7935550cabb842d0b3e91ef.png)
- error_rate: what percentage of images in the validation set are classified incorrectly
- accuracy = 1.0 - error_rate
- parameter **pretrained** defaults to True (*transfer learning*). When using a pretrained model, cnn_learner removes the last layer and replaces it with one or more new layers with randomized weights, of an appropriate size for the dataset you are working with. This last part of the model is known as the **head**.
- learn.**fine_tune**(epochs=1)
- adapts a pretrained model to a new dataset
- the plain **fit** method would overwrite all weights of the pretrained model
- learn.**show_results**(max_n=6, figsize=(7,8))
Computer vision datasets are normally structured in such a way that the **label** for an image is part of the filename or path - most commonly the parent folder name.
A **Transform** contains code that is applied automatically during training:
- **item_tfms**: applied to each item, e.g. each item is resized to a 224-pixel square
- **batch_tfms**: applied to a batch of items at a time using the GPU (see the sketch below)
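A minimal sketch of adding GPU batch augmentation to the same DataLoaders, assuming fastai's built-in `aug_transforms` set:
```python
# sketch: same pets DataLoaders as above, plus batch-level augmentation
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat,
    item_tfms=Resize(224),         # per item, on CPU
    batch_tfms=aug_transforms())   # per batch, on GPU
```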
## upload a picture
```python
from fastbook import * # required for widgets
uploader = widgets.FileUpload()
uploader
```
## test
```python
img = PILImage.create(uploader.data[0])
is_cat,_,probs = learn.predict(img)
```
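`predict` returns a `(decoded_label, label_index, probabilities)` tuple; with the boolean `is_cat` labels above the vocab is `[False, True]`, so `probs[1]` is the cat probability. A short sketch of reading it out:
```python
# sketch: interpret the prediction tuple from learn.predict
print(f"Is a cat?: {is_cat}")
print(f"Probability it's a cat: {probs[1].item():.4f}")
```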

# Time series
> A time series is a set of data points ordered in time.

The data is equally spaced in time, meaning it was recorded every hour, minute, month, or quarter. Time series are indexed by time, and that order must be kept.
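A minimal sketch of such a series, built with pandas (the values are made up; `freq="MS"` means month-start spacing):
```python
import pandas as pd

# sketch: an equally spaced, time-indexed series (monthly data assumed)
index = pd.date_range(start="2020-01-01", periods=6, freq="MS")
series = pd.Series([112, 118, 132, 129, 121, 135], index=index)
print(series)
```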
Decomposition is defined as a statistical task that separates a time series into its different components:
![Trend](../_resources/Trend.png)
The **trend** is defined as the slow-moving changes in a time series.
If you observe seasonality, then a **SARIMA model** would be relevant, because this model uses seasonal effects to produce forecasts.
![Seasonality](../_resources/Seasonallity.png)
The **seasonal** component captures the seasonal variation, which is a cycle that occurs over a fixed period of time.
**Residuals** are what cannot be explained by either the trend or the seasonal components; they usually correspond to random errors, also termed **white noise**.
![Seasonal and Cyclical](../_resources/Seasonal_and_cyclical.png)
![Seasonal and Trend](../_resources/Seasonallity_trend.png)
![Variation](../_resources/Variation.png)
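A decomposition along these lines can be sketched with statsmodels' `seasonal_decompose` (synthetic monthly data assumed here; plotting needs matplotlib):
```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# sketch: synthetic monthly series = trend + seasonality + noise
idx = pd.date_range("2015-01-01", periods=48, freq="MS")
series = pd.Series(
    np.linspace(100, 148, 48)                        # slow-moving trend
    + 10 * np.sin(2 * np.pi * np.arange(48) / 12)    # yearly seasonality
    + np.random.default_rng(42).normal(0, 2, 48),    # residual "white noise"
    index=idx,
)

result = seasonal_decompose(series, model="additive", period=12)
result.plot()  # panels: observed, trend, seasonal, resid
```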
![Birds-eye view of time series forecasting](../_resources/ForcastingProjectRoadmap.png)
> The only way to know that a model is good, or performant, is to compare it to a baseline. The baseline model is the simplest solution you can think of; it should not require any training, and the cost of implementation should be very low.
A **baseline model** is a trivial solution to your forecasting problem. It relies on heuristics or simple statistics and is usually the simplest solution; it requires no model fitting and is easy to implement.
For example:
- the arithmetic mean of past values
- if we see a cyclical pattern in the data, simply repeat that pattern into the future (see the sketch below)
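A minimal sketch of both baselines, assuming `y` is a hypothetical 1-D NumPy array of past observations (e.g. `series.to_numpy()` from the sketch above) with at least two full cycles:
```python
import numpy as np

# sketch: forecast the last h points of y with two trivial baselines
h, period = 12, 12
train, test = y[:-h], y[-h:]

mean_forecast = np.full(h, train.mean())   # arithmetic-mean baseline
seasonal_naive = train[-period:][:h]       # repeat the last full cycle

def mape(actual, pred):
    return np.mean(np.abs((actual - pred) / actual)) * 100

print(f"mean baseline MAPE:   {mape(test, mean_forecast):.1f}%")
print(f"seasonal naive MAPE:  {mape(test, seasonal_naive):.1f}%")
```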
