diff --git a/AI/nvidea/nvidea.md b/AI/nvidea/nvidea.md
index afcc10b..dca8e2b 100644
--- a/AI/nvidea/nvidea.md
+++ b/AI/nvidea/nvidea.md
@@ -28,23 +28,25 @@ find /usr/lib/modules -name nvidia.ko
 nvidia-settings
 
 ## run
+
 ```bash
 nvidia-smi
 nvidia-smi -L
 nvidia-smi -l n # run every n seconds
 ```
-
 ## monitoring nvidia
-https://github.com/fbcotter/py3nvml
+
+
 ---
 
 ## successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
 
 => error; modify on the host and set the -1 to 0:
+
 /sys/bus/pci/devices/0000:2b:00.0/numa_node
 
 for a in /sys/bus/pci/devices/*; do echo 0 | sudo tee -a $a/numa_node; done
 
-https://stackoverflow.com/questions/44232898/memoryerror-in-tensorflow-and-successful-numa-node-read-from-sysfs-had-negativ
+
 ---
@@ -52,19 +54,25 @@ https://stackoverflow.com/questions/44232898/memoryerror-in-tensorflow-and-successful-numa-node-read-from-sysfs-had-negativ
 
 ```bash
 sudo crontab -e
+sudo VISUAL=vi crontab -e
 
 # Add the following line
 @reboot (echo 0 | tee -a "/sys/bus/pci/devices//numa_node")
 ```
+
 [Source](https://askubuntu.com/questions/1379119/how-to-set-the-numa-node-for-an-nvidia-gpu-persistently)
 
 ---
 
 ## start docker with --gpus=all every time, otherwise errors like
+
 ### failed call to cuInit: UNKNOWN ERROR (-1)
+
 ### no NVIDIA GPU device is present: /dev/nvidia0 does not exist
+
 docker run -it -p 8888:8888 --gpus=all tensorflow/tensorflow:latest-gpu-jupyter
 
 ---
 
 ## update nvidia drivers
-ubuntu-drivers autoinstall
\ No newline at end of file
+
+ubuntu-drivers autoinstall
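+
+Optionally check first what would be installed (a minimal sketch; `ubuntu-drivers devices` lists the detected GPU and the recommended driver package):
+
+```bash
+# list detected hardware and the recommended driver package
+ubuntu-drivers devices
+# install the recommended drivers, then reboot
+sudo ubuntu-drivers autoinstall
+```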
diff --git a/Boek/apache-hive-essentials-2nd.pdf b/Boek/apache-hive-essentials-2nd.pdf
new file mode 100644
index 0000000..6555841
Binary files /dev/null and b/Boek/apache-hive-essentials-2nd.pdf differ
diff --git a/Python/Pytorch/Summary Book Deep Learning for Coders with Fastai .md b/Python/Pytorch/Summary Book Deep Learning for Coders with Fastai .md
index 5264c19..d392ec7 100644
--- a/Python/Pytorch/Summary Book Deep Learning for Coders with Fastai .md
+++ b/Python/Pytorch/Summary Book Deep Learning for Coders with Fastai .md
@@ -33,15 +33,16 @@ path = untar_data(URLs.PETS)/'images'
 def is_cat(x): return x[0].isupper()
 dls = ImageDataLoaders.from_name_func(
-    path, get_image_files(path), valid_pct=0.2, seed=42,
-    label_func=is_cat, item_tfms=Resize(224))
+    path, get_image_files(path), valid_pct=0.2, seed=42,
+    label_func=is_cat, item_tfms=Resize(224))
 learn = cnn_learner(dls, resnet34, metrics=error_rate, pretrained=True)
 learn.fine_tune(epochs=1)
 learn.show_results(max_n=6, figsize=(7,8))
 ```
+
 - **Image**DataLoaders: the first part of the name indicates the type of data, e.g. image or text
-	- **Tabular**DataLoaders, **Text**DataLoaders or **Segmentation**DataLoaders
+    - **Tabular**DataLoaders, **Text**DataLoaders or **Segmentation**DataLoaders
 - **valid_pct**=0.2: the size of the **validation set** (== development set), used to **measure the accuracy of the model**; the remainder is the **training set**
 - **seed**=42 sets the random seed to the same value every time we run this code, which means we **get the same validation set** on every run.
@@ -54,26 +55,28 @@ Hiring third party, then use a good test set for mitigating risks
 
 *Many methods to avoid overfitting, but use them only when confirmed that overfitting is occurring.*
 
 - learn = **cnn_learner**(dls, resnet34, metrics=error_rate): convolutional neural network (CNN)
-	- dls: the data
-	- resnet34: 34 in resnet34 refers to the number of layers in the resnet
-	Models using architectures with more layers take longer to train and are more prone to overfitting, when using more data, they can be quite a bit more accurate.
-	- metric: measures the quality of the model’s predictions using the validation set, and will be printed at the end of each epoch.
+    - dls: the data
+    - resnet34: the 34 refers to the number of layers in the ResNet
+    Models with more layers take longer to train and are more prone to overfitting; given more data, they can be quite a bit more accurate.
+    - metric: measures the quality of the model’s predictions on the validation set, printed at the end of each epoch.
 
 ![7e3dd277a7935550cabb842d0b3e91ef.png](../../_resources/7e3dd277a7935550cabb842d0b3e91ef.png)
-	- error_rate: what percentage of images in the validation set are being classified incorrectly
-	- accuracy = 1.0 - error_rate
-	- parameter **pretrained** defaults to True (*transfer learning*). When using a pretrained model, cnn_learner will remove the last layer and replace it with one or more new layers with randomized weights, of an appropriate size for the dataset you are working with. This last part of the model is known as the **head**.
+    - error_rate: the percentage of images in the validation set that are classified incorrectly
+    - accuracy = 1.0 - error_rate
+    - the **pretrained** parameter defaults to True (*transfer learning*). When using a pretrained model, cnn_learner removes the last layer and replaces it with one or more new layers with randomized weights, sized appropriately for the dataset you are working with. This last part of the model is known as the **head**.
 - learn.**fine_tune**(epochs=1)
-	- adapt a pretrained model for a new dataset
-	- **fit** method will overwrite all weights of the pretrained model
+    - adapts a pretrained model to a new dataset
+    - the plain **fit** method would overwrite all weights of the pretrained model
 - learn.**show_results**(max_n=6, figsize=(7,8))
 
 Computer vision datasets are normally structured in such a way that the **label** for an image is part of the filename or path - most commonly the parent folder name.
 
 A **Transform** contains code that is applied automatically during training:
+
 - **item_tfms**: applied to each item, e.g. each item is resized to a 224-pixel square
 - **batch_tfms**: applied to a batch of items at a time, using the GPU
 
 ## upload a picture
+
 ```python
 from fastbook import * # required for widgets
 uploader = widgets.FileUpload()
@@ -81,6 +84,7 @@ uploader
 ```
 
 ## test
+
 ```python
 img = PILImage.create(uploader.data[0])
 is_cat,_,probs = learn.predict(img)
diff --git a/TimeSeries/Intro.md b/TimeSeries/Intro.md
index e69de29..7c71db3 100644
--- a/TimeSeries/Intro.md
+++ b/TimeSeries/Intro.md
@@ -0,0 +1,36 @@
+# Time series
+
+> A time series is a set of data points ordered in time.
+The data is equally spaced in time, meaning that it was recorded at every hour, minute, month, or quarter.
+Time series are indexed by time, and that order must be kept.
+
+Decomposition is defined as a statistical task that separates a time series into its different components:
+
+![Trend](../_resources/Trend.png)
+The **trend** is defined as the slow-moving changes in a time series.
+If you observe seasonality, then a **SARIMA model** would be relevant, because this model uses seasonal effects to produce forecasts.
+
+![Seasonality](../_resources/Seasonallity.png)
+The **seasonal** component captures the seasonal variation, which is a cycle that occurs over a fixed period of time.
+
+**Residuals** are what cannot be explained by either the trend or the seasonal component. Residuals usually correspond to random errors, also termed **white noise**.
+
+![Seasonal and Cyclical](../_resources/Seasonal_and_cyclical.png)
+
+![Seasonal and Trend](../_resources/Seasonallity_trend.png)
+
+![Variation](../_resources/Variation.png)
+
+![Bird’s-eye view of time series forecasting](../_resources/ForcastingProjectRoadmap.png)
+
+> The only way to know that a model is good, or performant, is to compare it to a baseline. The baseline model is the simplest solution you can think of: it should not require any training, and the cost of implementation should be very low.
+
+A **baseline model** is a trivial solution to your forecasting problem. It relies on heuristics or simple statistics and is usually the simplest solution. It does not require model fitting, and it is easy to implement.
+
+For example (sketched below):
+
+- arithmetic mean
+- if we see a cyclical pattern in our data, we can simply repeat that pattern into the future
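+
+A minimal sketch of these two baselines (assuming a pandas Series `y` of observations and a 12-step seasonal cycle; the function names are illustrative):
+
+```python
+import pandas as pd
+
+def mean_baseline(y: pd.Series, horizon: int) -> pd.Series:
+    # predict the historical mean for every future step
+    return pd.Series([y.mean()] * horizon)
+
+def seasonal_naive_baseline(y: pd.Series, horizon: int, season: int = 12) -> pd.Series:
+    # repeat the last observed cycle into the future
+    last_cycle = y.iloc[-season:].to_list()
+    return pd.Series([last_cycle[i % season] for i in range(horizon)])
+```
+
+Either forecast can then be scored with the same error metric as a fitted model; a model worth keeping should at least beat these numbers.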
diff --git a/_resources/ForcastingProjectRoadmap.png b/_resources/ForcastingProjectRoadmap.png
new file mode 100644
index 0000000..ff7f627
Binary files /dev/null and b/_resources/ForcastingProjectRoadmap.png differ
diff --git a/_resources/Seasonal_and_cyclical.png b/_resources/Seasonal_and_cyclical.png
new file mode 100644
index 0000000..75cd4cb
Binary files /dev/null and b/_resources/Seasonal_and_cyclical.png differ
diff --git a/_resources/Seasonallity.png b/_resources/Seasonallity.png
new file mode 100644
index 0000000..8ca02c9
Binary files /dev/null and b/_resources/Seasonallity.png differ
diff --git a/_resources/Seasonallity_trend.png b/_resources/Seasonallity_trend.png
new file mode 100644
index 0000000..2755f53
Binary files /dev/null and b/_resources/Seasonallity_trend.png differ
diff --git a/_resources/Trend.png b/_resources/Trend.png
new file mode 100644
index 0000000..a24e2a9
Binary files /dev/null and b/_resources/Trend.png differ
diff --git a/_resources/Variation.png b/_resources/Variation.png
new file mode 100644
index 0000000..36ef959
Binary files /dev/null and b/_resources/Variation.png differ